Do You Have to Share Private Details with Smart Machines?
Key Takeaways
- AI lacks sentience and only mimics human understanding, relying on data sets for training.
- Developers need user data to improve AI, but ensuring privacy and trust is essential.
- Users should follow standard internet safety rules when interacting with AI to protect personal information.
As AI becomes more common and concerns grow about how user data is protected, it’s important to understand how these concerns differ from the internet security worries of the past 20 years. Surprisingly, the solution might be to apply the same old internet safety rules we’ve always used.
What Personal Data?
We assume that security technology has matured enough by now that AI developers will keep personal information like names, email addresses, and payment details secure. However, more and more companies with at least some generative AI features are adding clauses to their terms and conditions that grant them free use of user prompts and AI responses.
This means that a human might be reviewing that data. If you’re using ChatGPT to do some vacation planning, that may not be a problem. If you’re asking GPT to debug proprietary code or edit confidential contracts in Adobe, it could be.
How Generative AI Is Trained
As an AI trainer, I, along with hundreds of other trainers, wrote AI prompts and the corresponding responses to contribute to a massive data set that would teach LLMs how to respond. Most AI models are trained on data sets like the ones I work on, and those data sets shape the model in a different way than the natural language processing algorithms that form the framework of most generative AI models. How data sets are created is proprietary, because a data set is another way to make an AI model unique.
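To make this concrete, here is a minimal sketch of what one record in such a prompt-response data set might look like. Real data sets are proprietary and vary widely, so every field name here is hypothetical:

```python
import json

# Hypothetical structure for one record in a prompt-response training set.
# Field names are illustrative only; real formats are proprietary and vary.
record = {
    "prompt": "Summarize the plot of 'Moby-Dick' in two sentences.",
    "response": "Ishmael joins the whaling ship Pequod, whose captain, "
                "Ahab, is obsessed with hunting a white whale. The pursuit "
                "ends in disaster, with Ishmael the sole survivor.",
    "trainer_id": "anon-4821",   # written by a trainer, not an end user
    "quality_rating": 5,         # reviewed and scored before inclusion
}

# Data sets like this are often stored as JSON Lines, one record per line.
with open("training_set.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```

Thousands of records like this, written and rated by humans, are what teach a model the difference between a good response and a bad one.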
Why Developers Need User Data
While AI tech in some form has existed since 1955, the technology in its current form is new. Companies are trying to monetize it by commissioning new or custom AI features for their software. There is little to no data on how ordinary people use AI, especially when many ordinary people don’t use it at all. How can AI improve computer programs and apps? How can it make things easier? One common complaint is that AI gets shoved into everyday apps or websites where it isn’t wanted or needed, which can confuse users or make them less efficient. The old adage, “If it ain’t broke, don’t fix it,” springs to mind!
One way developers and their clients can resolve this issue is by employing humans to read end-user prompts, categorize how the AI was used, and evaluate how well the AI performed. Developers, marketing experts, and social scientists need to know how we use AI because that will tell them how AI can help people instead of irritating them. The only way forward is for them to study end-user prompts: that is, the information you’ve provided to the AI service while using it. The good news is that most AI apps allow you to opt out of data collection. Developers need user data, and quality matters, but most of the time one user’s data is just as useful as another’s.
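A toy sketch of what a human reviewer’s categorization of a single end-user prompt might look like follows. The categories and field names are entirely hypothetical, not any company’s actual review schema:

```python
from dataclasses import dataclass

# Hypothetical record a reviewer might fill out for one end-user prompt.
@dataclass
class PromptReview:
    prompt_text: str         # the end-user prompt being evaluated
    use_case: str            # e.g. "travel planning", "code debugging"
    ai_performed_well: bool  # reviewer's verdict on the AI's response
    notes: str = ""          # free-form observations

review = PromptReview(
    prompt_text="Plan a 3-day trip to Lisbon on a budget.",
    use_case="travel planning",
    ai_performed_well=True,
    notes="Itinerary was realistic; listed prices slightly outdated.",
)
print(review)
```

Aggregated over many such reviews, this is the kind of data that tells a developer which AI features people actually use and which ones just get in the way.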
How Developers Might Protect Our Privacy
Most developers claim that, to improve AI while protecting user privacy, prompts are always separated from the accounts that wrote them. If that’s true, the humans evaluating the data would have no way to tie a prompt to a specific person. So far, no major AI company has announced a security breach involving user data, but the potential is there. Unfortunately, unless a company gets hacked, we won’t know how careful it is being with our data. Another option commonly available on AI apps like Google Gemini, Facebook/Instagram, and even Adobe is letting users opt out of sharing their data. Again, this requires the end user to trust that the developer isn’t going to use the data anyway. Since there is no way for us to know how secure our data is with these companies, we should do what we can to act responsibly with our data.
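What might that separation look like in practice? Here is a minimal sketch, assuming one common approach: replacing the account identifier with a salted one-way hash before a prompt ever reaches a reviewer. This is an illustration of the general technique, not a claim about how any particular company does it:

```python
import hashlib
import secrets

# Salt kept server-side and rotated; never stored alongside the prompts.
SALT = secrets.token_bytes(16)

def deidentify(account_id: str, prompt: str) -> dict:
    """Return a reviewer copy of the prompt with no account identifier,
    only an opaque hash so repeat submissions can still be grouped."""
    opaque_id = hashlib.sha256(SALT + account_id.encode()).hexdigest()[:16]
    return {"reviewer_copy": prompt, "opaque_id": opaque_id}

record = deidentify("user-12345", "Debug this function for me...")
print(record)  # contains the prompt, but not the account that wrote it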
How Can We Protect Ourselves?
In a way, end-user prompts are our feedback to the developer about AI. Any AI that no one uses will reveal itself as unnecessary through this kind of data. However, until some level of trust has been established, it’s best to follow standard internet safety rules when writing prompts. Do not enter personal details such as your name, address, or ID numbers into an AI. Assume that everything you type into ChatGPT, a chatbot, or an image generator will be on the open internet for all time. Until developers have earned our trust, the practice should be to manage our data ourselves as much as possible.
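For the cautious, a rough sketch of a self-serve precaution follows: scrubbing obvious personal details from a prompt before sending it to any AI service. The patterns below catch only common formats and are no substitute for simply not typing the data in the first place:

```python
import re

# Illustrative patterns for common personal-data formats (US-style).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace anything matching a known personal-data pattern."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(scrub("Email me at jane.doe@example.com or call 555-867-5309."))
# -> Email me at [email removed] or call [phone removed].
```

A filter like this is a last line of defense, not a guarantee; the safest personal detail is the one that never leaves your keyboard.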