
A.I. – It’s a Tool Not Magic, and Abuses Must Be Stringently Regulated



ACCORDING TO LIZ - Artificial Intelligence, or A.I., is not really artificial but just the next level of computing - a tool, not magic. Some extol its benefits, but more often than not, I’ve heard of abuses.

Like cryptocurrency and the gig economy, A.I. is not complicated Wizard-of-Oz stuff, but it does have far greater potential to disrupt our lives and to deepen economic disparities, with the fruits of its successes going to the minuscule sector at the apex of our society while these tools toss everybody else into a vacuum outside the spaceship of this glittering future. 

This is not a fringe issue anymore. It has exploded at lightning speed from science fiction curiosity to being foisted on every American to a greater or lesser extent – in their workplace, in their purchasing, and in their children’s education. 

Funding for A.I. firms made up nearly half the $56 billion in U.S. start-up financing from April to June this year. 

A.I. can be incredibly beneficial – in expediting repetitive functions especially in areas such as health where time is a life and death matter – in exponentially developing new drugs, providing services for which people are clamoring… and pushing up profits for corporations. 

In view of the dizzying speed of its integration into the mainstream, we must all understand that computing is only as good – as accurate and unbiased – as its programming. 

In an effort to give this new iteration of computer-assisted expansion of knowledge free rein, its creators have unfortunately unleashed the dogs of double-speak. 

Purveyors promote the benefits with reckless disregard for examining and protecting consumers from the downside. 

By not giving it adequate parameters to determine what is right and what is wrong, what is truth and what is fiction, what is fact and what is opinion, A.I. will only aggravate existing problems in the world today by amplifying the lies, the misconceptions and the half-truths. 

The question now is how to curb A.I. and impose boundaries on issues of veracity. How to make its processes accountable, how to ensure accurate differentiation between truth and lies, hyperbole and opinion, subjectivity and partial truths, in ways that can be monitored and that everyone can trust. 

How to ensure transparency that addresses the dangers of data feedback loops, and the intersection between profit-making, social norms and people’s rights. 

The explosive magnitude of A.I.’s arrival in our lives means it has moved beyond the option of self-regulation and must be addressed through government policies, here and around the world, in ways that ensure legitimate A.I. systems cannot be hacked or hijacked by bots. 

So much of our lives is subject to impenetrable algorithms for expedited decision-making, with no recourse when banks deny mortgages, car dealerships deny loans, universities turn down applicants, and employment offers are not made. 

Then there is the macro-damage wrought when banks and financial systems fail, run over by the speed of A.I.-accelerated algorithmic decision-making. Think sub-prime mortgages. 

It’s a frightening systemic issue when correct or not, these systems have the power and ability to crash the world economy. 

Further magnifying the danger is the increasing dependence on cloud-computing brought to us by the folks now funding and flogging A.I. 

Who are you going to call to put it back again? Those same systems? 

And the foregoing is from a white-hat perspective. 

For black-hat hackers savoring their power, with a click they can weaponize those algorithms… 

So, all levels of government in every country must wake up to the need to act now, and the sooner the better. 

In terms of A.I.’s broader impact, 2023 brought it front and center as a focus of the WGA labor negotiations. Their negotiators knew the genie could not be stuffed back in the bottle once corporate employers got dollar signs in their eyes, but the writers struck for months demanding at a bare minimum, transparency and consultation. 

And in the process brought the dangers of A.I. out of the back room and onto the front pages and lead stories across the country. 

Companies and management were busily working behind the scenes to promote A.I. solutions that would let them remove costly workers, and their grandfathered protections, from the workforce. Now the Great Oz has had his curtain drawn back. 

The easy-peasy one-click solution for executives obscures an important point: even A.I. solutions depend on someone doing the work that generates their end products. 

Someone does the research, someone builds the goods, someone maintains the systems that track consumer demands. Someone makes the decisions to ensure those workers are paid as little as possible so the executives can maximize company profits and their pay. 

Workers in the United States want more pay and better benefits? Build in Bangladesh. Salespeople in American stores want bathroom breaks and might bitch to customers? Shutter brick-and-mortar stores and sell on the internet. 

More clicks, more money with less understanding of how the supply chain steals jobs and contributes to climate change. 

More clicks. With most consumers unaware of how those clicks dovetail into the monetization of data, leaving them vulnerable to psychological manipulation by corporate grandees. Giving rise to a corporate elite more effectively manipulating the courts and elected officials. 

Creating large pools of poorly protected exploitable data collections posing an existential danger for scamming, stalking, tracking, defrauding, and extortion of the innocent by those who would prey on the defenseless. Defenseless because A.I. has vastly amplified their vulnerabilities. 

Additionally, the datasets on which A.I. decision-making depends are flawed from the biases and inaccuracies inherent in their input and will further magnify and reinforce existing discrimination. 

Not to mention that too often it is arms of our own government in bed with those exacerbating the problem: many recent data-breaches have been traced to the CIA, NSA and their alphabet brethren as they boldly go where no man could go before and seek out new ways to invade Americans’ privacy in the name of… civilization? Big Brother on steroids. 

U.S. culture and language blames the victim even, and especially, if they were victimized through no fault of their own. Civil rights and liberties get tossed to the wayside in the name of military-industrial complex security (grossly expanded because of its inadequacies in the years leading up to 9/11), and corporate greed. 

This, despite generations of case law upheld by the courts holding that the Fourth Amendment’s protection from unreasonable searches and seizures by the government includes the right to privacy. 

The Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and many other countries’ constitutions concur. 

Even before the advent of A.I., availability of personal information online through our digital footprints on social media, in transacting business and pursuing pleasure, in connecting with loved ones and researching what is going on in the world around us, attracted unwanted scrutiny and exposed people to having their identity hacked and their interests misinterpreted or manipulated. 

A.I. has made it far more challenging to protect personal information and ensure that people’s privacy, dignity and autonomy are respected, to protect those human rights we hold so dear, to defend democracy itself, and the freedom for individuals to live their lives without fear of being monitored or surveilled. 

A.I. now allows people’s own voices and faces to be weaponized against them and used to destroy their lives. 

A.I. is both a money-suck and a power-grab, and raises red flags about sociotechnical concerns where technological innovation blurs the boundaries between it and real-world experiences. 

We, as Americans, as people everywhere around the globe, are facing an existential danger if the forces that could contain Big Tech continue to protect A.I. from government regulation. 

That black box of data and programming matrices that A.I. developers and profiteers claim as trade secrets and refuse to reveal is a Pandora’s box of misaligned values about which human beings need to be very, very careful. 

We-the-people must rise up and force the government to impose full transparency and vigorously enforce coherent systems to analyze and govern the use of A.I., acknowledging that there is no one-size-fits-all. The use and application of new laws and regulations will have to vary significantly depending on context. And that’s OK. 

Letting lawyers and the courts parse new tech from a for-profit point of view puts anti-democratic forces into play, allowing whoever can pay the most to leverage whatever benefits them the most. 

Instead, mankind needs tools to expose the risks, and policies to protect quality of life for the vast majority of people. 

Both tools and policies because one cannot exist without the other. Mankind must simultaneously address the broad existential risks at the same time it acts swiftly to limit immediate harms. 

To be clear, A.I. is a tool. It is not the Second Coming. It has no built-in omnipotence and, in fact, must be regulated to ensure safety, as are all technological developments from aircraft to the zirconium alloys used in nuclear reactors. 

At the moment, competition for research and consumer dollars has created a Wild West where anything goes. There are few standards, nothing enforceable, not even a structure through which to develop oversight. 

Scholars and scientists, business executives and government contractors need to develop processes to address all the conflicts and concerns that have boiled over with the almost cancerous growth of this new field. 

We need chief A.I. officers in every entity to ensure systemic coherence, but desperately need to avoid the bureaucracy and red tape that has rendered so many regulatory procedures sclerotic. The focus must be flexibility and accountability. 

This is especially essential given A.I.’s weaponization for military applications around the world. Already, in response to the ongoing Russian invasion, Ukrainian companies are adapting A.I.-enhanced consumer technology. And it’s a certainty that the U.S. military is far out in front of that curve. 

There are concerns about components or entire systems being manufactured overseas where governments may be inimical to American interests and could include fantastical Trojan horse viruses. 

A strong start would be to impose restrictions, including mandating data transparency and decision-making protocols, on federal contractors, who employ 20% of the U.S. workforce, and from there to rapidly expand outwards, both within the United States and through its allies and the United Nations. 

And since almost all technology companies sell items or components to governments, that expansion should occur as far and as fast as A.I.’s rollout in the first place. 

However, this will take time so it’s urgent the government start immediately, prioritizing the must-dos over the should-dos and addressing first what is feasible, the low-hanging fruit, in a roll-out designed to leave nothing out. 

Inspire people to come up with solutions, not navel-gaze at the problems. Embrace Google’s original motto of “Don’t be evil,” which still exists as a cultural assumption in Silicon Valley and its ilk. 

And governments must stand firm against pressure from the industries they must regulate, those powerful companies that claim they are on top of things when ample evidence exists that they aren’t. 

It is the obligation of our elected officials to step in and protect people from potential harm, mitigate dangers before the worst impacts can be felt. 

We need a sea-change in American culture to put corporations firmly under the control of the people, to reimagine all tech as governable and subject to the rule of law and, while continuing to support innovation, ensure that all endeavors are in the best interest of the vast majority of people. 

The goal should be letting the people of the United States choose how to be in relationship with the changes in our future, including embedding democratization into a collaborative approach to the ongoing policy and development that affect their lives. 

The government must prioritize care for all people, making sure everyone is included and no one excluded. Spend our money on the expertise necessary to expedite protections, transparency and accountability, and system audits that work for everyone, especially vulnerable groups. Give precedence to data privacy, and explicitly require an expedited, arms-length human component in appeals of machine-based decisions and claims of discrimination. 

And if Washington is a little slow to respond, the people must rise up against its complacency, calling for boycotts of businesses that abuse A.I. and finding ways to affect companies’ profits and the economy, issuing a wake-up call that humans have rights, too. 

For people wishing to take a much deeper dive into the issue and paths to solutions already being addressed in D.C.  

Those wanting a broader overview of internet and technology issues can learn more in two very accessible books: Dignity in a Digital Age: Making Tech Work for All of Us and Progressive Capitalism: How to Make Tech Work for All of Us. Both are by Ro Khanna, Representative from California’s Fremont district (which encompasses Silicon Valley), author of the Internet Bill of Rights developed in the wake of abuses and breaches during the 2016 election, and a dark-horse candidate for President.

(Liz Amsden is a contributor to CityWatch and an activist from Northeast Los Angeles with opinions on much of what goes on in our lives. She has written extensively on the City's budget and services as well as her many other interests and passions.  In her real life she works on budgets for film and television where fiction can rarely be as strange as the truth of living in today's world.)