Công Nghệ Thông Tin - Information Technology
Tin Tức (News)

Three problems with generative AI have yet to be addressed

hoidabunko · Published November 7, 2023 · Last updated November 7, 2023 at 11:54 AM
[Image: A network of linked question marks.]

Generative AI continues to take the world by storm, but there are growing concerns that this technology could, if not aggressively managed and regulated, do a great deal of harm. Beyond fears about the technology making decisions and acting autonomously against our interests, another set of concerns involves the training sets it builds.

These training sets will increasingly capture everything an employee does. That data can be used to assess employee productivity; to track the creation of confidential documents, offerings, and products; and eventually to create a digital twin of the organization that deploys the technology.

Let’s talk about each in turn.

Misusing training sets to gauge employee ‘productivity’

As employees increasingly use generative AI, it will capture everything they do, and using that data to monitor what an employee does during the workday is an obvious temptation. But employees will likely feel their privacy is being violated, and if care isn't taken to tie worker behavior to results, companies could make bad decisions.

For instance, an employee who works long hours but is relatively inefficient might be rated above an employee who works short hours but is highly efficient. If the focus is on hours worked instead of results, not only will the training set favor inefficient behavior, but efficient employees who should be kept on board will be managed out.

The right way to do this is with the permission of the employee and the assurance that AI will be used to enhance, not replace, them; the focus should be on efficiency, not raw hours worked. Done that way, the training set can be used to build more efficient tools and digital twins, and to train employees to be more efficient.
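As a toy illustration of the results-over-hours point, here is a minimal Python sketch. The `WorkLog` record and its fields are hypothetical, not any real monitoring API; the point is only that ranking by hours and ranking by output per hour can pick opposite employees.

```python
from dataclasses import dataclass

@dataclass
class WorkLog:
    """Hypothetical weekly record, as a monitoring tool might capture it."""
    name: str
    hours_worked: float
    tasks_completed: int

def efficiency(log: WorkLog) -> float:
    """Results per hour worked -- the metric the text argues for."""
    return log.tasks_completed / log.hours_worked if log.hours_worked else 0.0

logs = [
    WorkLog("long-hours", hours_worked=60, tasks_completed=12),   # 0.2 tasks/hour
    WorkLog("short-hours", hours_worked=30, tasks_completed=15),  # 0.5 tasks/hour
]

# Ranking by hours alone favors the first employee;
# ranking by efficiency favors the second.
by_hours = max(logs, key=lambda log: log.hours_worked)
by_efficiency = max(logs, key=efficiency)
print(by_hours.name, by_efficiency.name)  # long-hours short-hours
```

A training set built from the first ranking would learn to reward presence; one built from the second would learn to reward output.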

Employees who know AI-based tools will be more helpful than punitive are more likely to embrace the technology.

Security is a must

There is another potential danger: the data sets created by capturing employee behavior could themselves be highly risky. They could include highly proprietary products, processes, and internal operations that competitors, governments, and hostile actors could use to gain insight into a firm's operations.

Access to a training set from an engineer, engineering manager, or executive could provide deep insights into how they make decisions, what decisions they’ve made, plans for future products and their status, problems within the company — and secrets a company would prefer to remain secret. 

Even if a specific source is hidden, a smart researcher could, just from the nature and detail of the content, determine who contributed it and what that employee does in substantial detail. That information could be highly valuable to a hostile actor or corporate rival and needs to be protected. And because these tools enhance individual employees' work, the likelihood of the data leaving with a departing employee, or with one whose home-office setup is compromised, is high.
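A simple redaction pass illustrates why hiding the source is not enough. In this minimal sketch (the patterns and the `EMP-NNNN` internal-ID format are made up for illustration, not a complete PII scrubber), obvious identifiers are stripped, yet the surviving content still reveals what the contributor works on:

```python
import re

# Hypothetical redaction pass over captured text before it enters a training set.
# Both patterns are illustrative assumptions, not a real scrubbing standard.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "EMPLOYEE_ID": re.compile(r"\bEMP-\d{4,}\b"),  # assumed internal ID format
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Reviewed roadmap with jane.doe@example.com (EMP-00731) before launch."
print(redact(record))
# Reviewed roadmap with [EMAIL] ([EMPLOYEE_ID]) before launch.
```

The names are gone, but "reviewed roadmap before launch" still narrows the contributor down to a handful of people, which is exactly the re-identification risk the text describes.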

Protecting against that is critical to the continued operations of a company.

It gets better — and worse

Once you aggregate training sets across a company, you could gain insights about the firm’s operations that could lead to a far more efficient and profitable company. (Of course, this same information in the hands of a regulator or hostile attorney could provide nearly unimpeachable evidence of wrongdoing.) Or imagine a competitor gaining access to this kind of information; they could effectively create a digital clone of the firm — and use it to better anticipate and more aggressively respond to competitive actions by the company using generative AI. 

This level of competitive exposure is unprecedented and, should a competitor gain access to the firm’s training files, a rival could effectively push the compromised company out of business. 

Generative AI is a real game-changer, but it comes with risks. We know it's not yet mature, we know its answers can't always be trusted, and we know it can be used to create avatars designed to fool us into buying things we don't need. And while it brings opportunities to improve employee productivity, it can become a massive security risk.

Here’s hoping you and your company learn how to use it right. 

