The REAL concern about AI is not what your elected officials say it is
Regulations that focus on the private sector and 'foreign actors' miss the point
When state and federal policymakers gather to talk about artificial intelligence, they usually focus on the potential for misuse of big data by social media companies, foreign countries, and rogue groups with nefarious agendas. But typically, they are missing the biggest danger of them all: our own governments.
Knowing that LinkedIn, Facebook, or Google uses your private information to train AI, you can choose to opt out and use a platform that promises not to do that, much the way people have chosen Rumble and X for their promises to protect speech while competing platforms have chosen to censor content. Big Tech companies can’t force you to participate.
But just as social media and web search platforms take in a lot of data, so do local, state, and federal governments. Police scan your license plates and keep records of your travels. They have cameras that monitor intersections and buildings and parks. Various agencies collect health records and others collect financial data. What Facebook, for example, knows about you is probably a fraction of what government knows about you.
And if the government believes a private company knows something about you and can articulate a reason to demand the data, that information can easily be added to the government’s repository as well. Most companies either have a cozy relationship with government or are too afraid or lazy to fight a demand for customer information.
If you’re worried about Facebook randomly or unnecessarily giving up your personal information, posts, or photos, you don’t need to use Facebook. You can find other ways of connecting with people. But government is both powerful and hard to escape. If the government is gathering pictures, videos, and private details about you, then short of moving to another town, state, or country, you are going to have a very difficult time avoiding it.
Governments are already starting to combine their existing tools, such as cameras, scanners, and compulsory private-sector reporting, with AI. What comes next is nearly impossible to predict, beyond saying it likely won’t be good.
Boise State University undergraduate Spencer Reed argues in a recent commentary that state and federal elected officials should do more to regulate the private sector’s use of artificial intelligence, pointing to what LinkedIn and perhaps other social media platforms are doing with private data to build their AI language models. He writes:
“There is a button in the LinkedIn settings which allows users to opt out, but the steps required to do so are not immediately obvious. It is not in LinkedIn’s best interest to train its models by asking users to opt in, and without any regulations regarding how they should train their models, they do not have to worry about accountability to anyone but themselves.
In the absence of regulation, big tech firms are left as the sole arbiters of what is right and wrong in the emerging AI market. Individual privacy is sacrificed for the sake of continual and accelerated development of AI.”
This is standard fare, and Reed will no doubt sound compelling to legislators looking to use the force of law to “protect” people from AI in this emerging field. And legislators, wanting to appear to be “doing something” for their constituents, will abide.
The argument for government intrusion in the private sector will be that people don’t really know that social media platforms are using data in ways that are injurious to privacy, and that the companies should either be prohibited from doing so or be required to make their opt-out features more obvious than they are now.
Meanwhile, Idaho lawmakers may, come January, introduce legislation that looks at how the state might protect its residents from foreign actors (meaning out of state or out of country) engaged in seemingly malicious uses of AI, which might include, for example, using a person’s or company’s intellectual property to generate AI images, computer code, or branding.
But so far, we’ve heard no talk from elected officials about placing limits on the government agencies they oversee: stopping them from using AI-curated big data to monitor, hurt, or control everyday Americans, or from using AI as a force multiplier for police and other government agencies, e.g., having “AI agents” do the work of police officers or office clerks.
AI technology is moving rapidly. If elected officials continue to ignore the question of how to stop AI deployment and weaponization in government, it will soon be too late.