ChatGPT is exciting, but Microsoft’s influence is worrying

AI has made its way into our daily lives, and ethical discussions about it have grown as a result, particularly around the amount of data these AI services collect from users. After all, wherever potentially sensitive information is stored at scale, cybersecurity and privacy concerns follow.

Microsoft’s Bing search engine, newly equipped with OpenAI’s ChatGPT technology and now rolling out to users, raises its own set of concerns, as Microsoft doesn’t have the best track record when it comes to respecting its customers’ privacy.

Microsoft has occasionally faced challenges over its management of, and access to, user data, though far less often than contemporaries like Apple, Google and Facebook, even though it handles a significant amount of user information, including for selling targeted ads.

It has been targeted by some government regulatory bodies and organisations, such as when France ordered Microsoft to stop tracking users through Windows 10; the company responded with a set of sweeping privacy measures.

Jennifer King, director of consumer privacy at the Center for Internet and Society at Stanford Law School, speculated that this is due in part to Microsoft’s entrenched position in its own market and the longstanding relationships with governments that its history affords it. With more experience dealing with regulators, it may have avoided the level of scrutiny directed at its competitors.

Data impact

Microsoft, like other companies, now finds itself having to respond to a mass influx of user chat data driven by the popularity of chatbots like ChatGPT. According to The Telegraph, Microsoft has reviewers who analyze user submissions in order to limit harm and respond to potentially dangerous input, combing through users’ conversation logs with the chatbot and intervening to moderate “inappropriate behaviour”.

The company claims that it strips submissions of personal information, that users’ chat transcripts are accessible only to certain reviewers, and that these safeguards protect users even when their conversations with the chatbot are under review.
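
Microsoft hasn’t published exactly how this stripping works, but de-identifying chat logs typically means masking obvious identifiers before a transcript ever reaches a reviewer. Here is a minimal sketch in Python; the patterns, placeholder tags and function name are our own illustration, not Microsoft’s pipeline, and a production system would rely on far more robust detection (such as a trained named-entity model):

```python
import re

# Hypothetical patterns for obvious identifiers; illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(transcript: str) -> str:
    """Replace matched identifiers with placeholder tags before review."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

print(redact("Reach me at jane@example.com or +1 555 010 9999."))
# -> Reach me at [EMAIL] or [PHONE].
```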

A Microsoft spokesperson explained that it uses both automated review tools (because there is a large amount of data to comb through) and manual reviewers. This is standard practice for search engines, they added, and is also covered in Microsoft’s privacy statement.
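
How the work is split between machines and people is not something Microsoft has documented publicly. A common pattern, sketched below in Python with entirely hypothetical thresholds and a stand-in scoring function, is to let an automated scorer handle the bulk of traffic and route only borderline cases to human reviewers:

```python
# Hypothetical hybrid-moderation triage; thresholds and the scoring
# function are illustrative, not Microsoft's actual pipeline.
AUTO_BLOCK = 0.95   # confident violation: act automatically
AUTO_PASS = 0.10    # confident non-violation: no human review needed

def risk_score(message: str) -> float:
    """Stand-in for a trained classifier; here, a crude keyword check."""
    flagged = ("bomb", "credit card number", "home address")
    hits = sum(word in message.lower() for word in flagged)
    return min(1.0, hits / len(flagged) + 0.05 * bool(hits))

def triage(message: str) -> str:
    score = risk_score(message)
    if score >= AUTO_BLOCK:
        return "blocked"            # automated intervention
    if score <= AUTO_PASS:
        return "passed"             # never seen by a human
    return "queued_for_review"      # the manual-reviewer path

print(triage("what's the weather like?"))     # passed
print(triage("store my credit card number"))  # queued_for_review
```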

The spokesperson took pains to assure users that Microsoft applies standard privacy protections such as “anonymity, encryption at rest, secure and reliable data access management, and data retention procedures.”
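
Microsoft hasn’t detailed what these measures look like in practice, but “encryption at rest” generally means stored transcripts are unreadable without a separately managed key. A minimal sketch using Python’s cryptography library follows; the inline key generation is illustrative only, as real deployments fetch keys from a managed key vault with rotation and audit logging:

```python
from cryptography.fernet import Fernet

# Illustrative only: real systems pull keys from a managed key vault,
# never generate and hold them inline like this.
key = Fernet.generate_key()
fernet = Fernet(key)

transcript = b"user: how do I reset my router password?"

# Encrypt before writing to storage, so anyone reading the raw
# storage medium sees only ciphertext.
ciphertext = fernet.encrypt(transcript)

# An authorised reviewer's service decrypts on read, after an
# access-control check (omitted here) confirms a business need.
assert fernet.decrypt(ciphertext) == transcript
```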

In addition, reviewers can view user data only on the basis of a verified business need, and third parties have no access at all. Microsoft has since updated its privacy statement to summarize and clarify the above: user information is collected and may be seen by human employees at Microsoft.

Under the spotlight

Microsoft isn’t the only company under scrutiny over how it collects and processes user data for intelligent chatbots. OpenAI, the company that created ChatGPT, has revealed that it, too, reviews user conversations.

Recently, Snap, the company behind Snapchat, announced that it is introducing a ChatGPT-powered chatbot that will sit inside the app’s already familiar chat format. It has warned users not to submit sensitive personal information to it, possibly for similar reasons.

These concerns multiply when ChatGPT and ChatGPT-equipped bots are used by people who work with sensitive and confidential information. Many such companies have warned employees not to send confidential company data to these chatbots, and some, such as JP Morgan and Amazon, have restricted or banned their use at work altogether.

Personal user data has been, and continues to be, a major issue in technology in general. Data misuse, or even malicious use of data, can have devastating consequences for both individuals and organizations. With each introduction of a new technology, those risks increase — but so does the potential reward.

Tech companies had better pay more attention to keeping our personal data as secure as possible, or they risk losing their customers’ trust and killing their nascent AI ambitions.
