Back On-Line–and this time I really mean it!

After a long pause, brought about by the combination of the busiest year of my career and multiple family issues, I’ve finally had time to resume blogging about the joys and terrors of AI and algorithms. Unfortunately, while I once felt I was ahead of the wave, I now feel I am barely keeping ahead of the deluge of new content and conversations about AI and algorithms. Nonetheless, I still think I have much to offer on the legal ramifications of AI that is not yet widely understood. That prospect generates in me both excitement and fear.

Sadly, most professionals still have not grasped these ramifications. Most reporters don’t have a clue about the true impact of AI on American society. They write about the perils of ChatGPT and AI-generated social media postings, as if these were the worst America faces from the effects of AI.
Well, perhaps to them these are the worst ramifications. Not only do reporters risk being replaced by ChatGPT for mundane assignments, like sports scores and police blotters, but some have experienced the anger of finding their names attached as bylines to ChatGPT-generated “news” stories that no human reporter wrote, full of fluff and falsities.

As for my fellow lawyers, I still get blank stares when I start sharing my concerns about American jurisprudence in the Age of Algorithms. Some lawyers focus on how ChatGPT can write briefs for them, or more correctly, on how ChatGPT could get them disbarred if they rely on it to write briefs for them. Frankly, I feel disdain for such lawyers, since they obviously are more concerned with easily increasing their billable hours than with providing the unique and fully researched legal representation their clients should expect. That leaves the true professionals in the legal field who want to provide complete and contemporary legal advice, but who do not yet grasp what their clients will soon face as a result of AI.

Interestingly, based solely on my discussions with medical practitioners over cocktails, it seems at least a significant number of doctors have started to grasp the implications of AI for their profession. I am based in Cleveland, which is blessed with a concentration of some of the most intelligent and advanced medical professionals in the world. Doctors in Cleveland seem to understand the benefits and risks of AI for their profession, and are actively discussing how to increase the benefits while ameliorating the risks. Since, in my opinion, the health industry could benefit more than most segments of the U.S. economy from AI, this is a good omen for American society. Unfortunately, if lawyers and regulators do not catch up to the AI competence of the health profession, we could face, ironically, a very unhealthy situation in which the profit motive of doctors and health organizations goes unchecked by the regulators and lawyers who would otherwise protect the average American from AI-enabled exploitation undertaken to increase health professionals’ profit margins.

Posted by Alfred Cowger

Back On-Line

I had to take several months off from my blog due to some family health issues. None of it was specifically Covid-related, but my mother’s ability to get needed medical care was stymied by the fact that the medical system, from doctors to hospital rooms to pharmacies, is overwhelmed by Covid cases. Since this blog is not focused on the U.S. healthcare system or the rights of the ignorant to make medical decisions, I will not spend a lot of time on this issue. However, it is obvious that those claiming the decision to remain unvaccinated is a personal choice that does not affect the rest of society are simply and totally wrong. And it doesn’t take an algorithm to figure that out!

Posted by Alfred Cowger

The Use of First Amendment “Rights” to Suppress First Amendment Rights

A new opinion from the Sixth Circuit, if it stands, has terrifying implications for First Amendment rights in the Age of Algorithms. As early as 1997, the Supreme Court recognized that online forums are the 21st-century equivalent of the town square of yore. Reno v. ACLU, 521 U.S. 844, 868 (1997). The problem is that these town squares are owned by private enterprises, and thus do not fall under the First Amendment protections afforded citizens against government censorship.

Now, the Sixth Circuit has issued a ruling finding that the alleged rights of a person who controls a forum trump the rights of any individual who is required to sit in that forum, even if that forum is a public setting. In a decision handed down last week, the Sixth Circuit ruled that a professor at a public college could refuse to identify a student in his class by that student’s self-identified gender, on the grounds that the professor had First Amendment religion and free speech rights to call that student by whatever gender the professor chose. Meriwether v. Hartop, Case No. 20-3289 (6th Cir. 2021), https://www.opn.ca6.uscourts.gov/opinions.pdf/21a0071p-06.pdf. The decision focused solely on the alleged right of the professor to force his personal religious ideology on his entire class. Worse, that professor was empowered, based on his own religious beliefs, to deny the right of a student in that class to identify her own gender. The Court actually held that the college should have accepted the professor’s suggestion that he simply call the student by her last name, so as to deny her gender identification, as if that were a reasonable accommodation. At no point did the Court consider the impact of this ruling on the student herself, or ask why a publicly employed professor may foist his religious beliefs on his entire class. Apparently, so long as public employees can claim that their religious beliefs support their behavior, they can aim hate speech filled with prejudice, bigotry and bias at any person seeking the public services or benefits those employees are being paid to offer.

If this ruling stands, it could be the death knell of free speech and religious liberty in the Age of Algorithms. The Court’s ruling means that when an individual is in control of a forum, that individual can dictate the religious, moral and ethical beliefs of every person participating in that forum, no matter how personal or profound the beliefs of those participants might be. In the Sixth Circuit case, that forum was public property and the individual controlling the forum was a professor paid with public funds. When the forum is, at best, quasi-public, such as Facebook, YouTube or Twitter, and the control over forum postings is in the hands of a private-enterprise algorithm, the result is chillingly clear. Those wishing to espouse beliefs that do not fit the corporate stance on those beliefs, or even just the marketing plan of that corporation, will see their posts blocked by highly efficient (if not necessarily accurate) algorithms. Those in control of a forum will control the speech and religious beliefs of everyone in that forum, including a forum that is a social media site.

Those wishing to express their beliefs have only one remedy, just like the students in that class: they can leave. But then the options are no options at all. Just as that small college has only one course on political philosophy, so too is there, for all practical purposes, only one YouTube and one Facebook. If one has to leave the town square of the Age of Algorithms in order to avoid First Amendment suppression, then the need to leave is itself a suppression of First Amendment rights. Has the death watch for the First Amendment begun?

Posted by Alfred Cowger

Interview on “Breaking Banks” podcast

An interview I did on the podcast “Breaking Banks” is now available for streaming here: https://provoke.fm/episode-382-algorithms-and-ai-the-good-the-bad-and-the-myth/

Breaking Banks is “The #1 global fintech radio show and podcast. Every week we explore the personalities, startups, innovators, and industry players driving disruption in financial services; from Incumbents to unicorns, and from the latest cutting edge technology to the people who are using it to help to create a more innovative, inclusive and healthy financial future.”

This interview, for obvious reasons, focused on the impact of AI and algorithms on the fintech industry. However, many of the issues raised would apply equally to other industries adopting algorithm-based products.

Posted by Alfred Cowger

The Limitations that Algorithms and AI Share With the Ford Pinto and Exploding Pressure Cookers

Algorithms and AI are the most exciting and profound product technology to enter society since the perfection of the internal combustion engine (although I would argue that the development of a family-affordable car was far more important). Yet, for all their awe-inspiring complexity, and their ability to make discoveries humans could only hypothesize about, they are still products. Thus, they can become harmful to humans through poor design and/or defective components. In other words, algorithm-based products could become as infamous as the Ford Pinto, which had a gas tank that exploded in rear-end collisions, or lead paint, which continues to cause brain damage in children one hundred years after it was applied to house trim.

Recent news stories have provided clear examples of how readily AI and algorithms can be designed defectively. NBC News reported on an algorithm used to screen rental applicants that confused Hispanic names, and thus denied credit to a Navy veteran with a top secret clearance based on the criminal convictions of a Mexican drug dealer. https://www.nbcnews.com/tech/tech-news/tenant-screening-software-faces-national-reckoning-n1260975. When a product’s purpose is to collect data and then accurately and efficiently draw conclusions from that data far quicker than any human, yet that product cannot accurately draw such conclusions, it should be subject to liability for its design defect, just like a car model that cannot be driven safely down the road. Moreover, while that car might horrifically mow down two or three pedestrians on a sidewalk, the rental screening algorithm could permanently damage the credit histories of hundreds or thousands of renters. Worse, unlike the car’s victims, the renters literally won’t even know what hit them, since the non-transparent nature of these algorithms will shield their defects from view.
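To make that failure mode concrete, here is a minimal sketch in Python of the kind of design defect at issue. The names, records, matching rule and threshold below are all hypothetical, my own illustration rather than any vendor’s actual code; the point is simply that a screener which attributes records on name similarity alone will inevitably merge different people.

```python
# A sketch of a defective tenant-screening matcher. All names, records, and
# the matching rule are hypothetical; real screening products are proprietary.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_criminal_records(applicant_name, records, threshold=0.8):
    # The design defect: records are attributed on name similarity alone,
    # with no date of birth, address, or other identifier to corroborate.
    return [r for r in records if similarity(applicant_name, r["name"]) >= threshold]

# Hypothetical data: the applicant and an unrelated person with a similar name.
records = [{"name": "Jose L. Hernandez", "offense": "drug trafficking conviction"}]
print(find_criminal_records("Jose Hernandez", records))
# -> the applicant is "matched" to someone else's conviction and denied housing
```

A safer design would demand a second identifier, such as a date of birth, to corroborate any name match before a record is attached to an applicant.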

That same news story noted a case going before the Supreme Court involving an individual denied a car loan because a similarly named individual was on the U.S. Government’s terrorist watch list. This demonstrates how algorithms and AI share the same exposure to bad components as other, more mundane products, such as pressure cookers with bad seals. It takes only one problem with a pressure cooker’s components to create a time bomb in one’s kitchen. However, at least with pressure cookers, the components are limited and identifiable. In the case of algorithms, each bit of data in the ether that is the internet, social media and the Cloud can be erroneous. From one erroneous data point can spring untold ramifications, as algorithms searching through trillions of data bits to make literally millions of decisions draw the wrong conclusions from that one bit of bad data. In the case going to the Supreme Court, it took only the error-filled U.S. terrorist watch list, which has already denied Senators and toddlers the right to board aircraft, to prevent innocent individuals from obtaining credit. Will those same individuals tagged as terrorists eventually be pulled over and shot one dark night by a security guard who has been told a terrorist is driving through a subdivision?
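The “bad component” problem can be sketched the same way. In the hypothetical below, with an invented watch list and invented screening functions, a single wrong entry fans out into every downstream system that consumes the list, and no individual system can see, let alone correct, the underlying error.

```python
# One erroneous upstream data point fanning out. The watch list, the name,
# and the downstream screens are all hypothetical.
WATCH_LIST = {"john q. public"}  # the single "bad component": one wrong entry

def on_watch_list(name: str) -> bool:
    # Every downstream system trusts the list; none can see why a name is on it.
    return name.lower() in WATCH_LIST

def screen_car_loan(name: str) -> str:
    return "DENIED" if on_watch_list(name) else "approved"

def screen_apartment(name: str) -> str:
    return "DENIED" if on_watch_list(name) else "approved"

def screen_boarding_pass(name: str) -> str:
    return "DENIED" if on_watch_list(name) else "approved"

for screen in (screen_car_loan, screen_apartment, screen_boarding_pass):
    print(screen.__name__, "->", screen("John Q. Public"))
# One wrong bit of data, three separate denials, and no screen exposes the error.
```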

Before businesses, and eventually society itself, introduce artificial intelligence into every aspect of our lives, American jurisprudence must set standards that allow individuals harmed by defective algorithm-based products to recover for that harm, and to prevent the harm from recurring, both to them and to their fellow citizens. Courts will even have to develop new procedural and evidentiary rules for breaking through the black boxes that shroud algorithm-based products.
Otherwise, the high technology meant to elevate us will literally kill us.

Posted by Alfred Cowger

AI’s Threat to American Jurisprudence–A Non-Partisan Issue

I have been asked whether my belief that AI and algorithms are a profound threat to Civil Rights, Legal Remedies, and American Jurisprudence (hence the title of my book) is some “libtard” belief, or one that is quite at home with QAnon conspiracies. I believe my fears are completely non-partisan, and that the risks fall on all Americans, regardless of their political beliefs. It may be one of the few issues upon which everyone can and should still agree.

After all, the risks of discrimination and the loss of due process rights from AI and algorithms cut across the entire political spectrum. If the expectation of privacy dwindles to nothing because everything we do is data-mined, analyzed and monetized, we will all face the consequences, whether we love or hate Trump. If social media algorithms can block posts because their content is offensive to other social media customers, those on the Left and Right could find their access to the Age of Algorithms’ “public square” equally blocked by the feelings of the over-sensitive and intolerant Center.

So, the next time we progressives cheer because the right wing finds its access to mainstream social media blocked, remember that our opinions may be the next target of those algorithms. The risk that government algorithms will deny benefits unfairly, and that those same algorithms will be immune from court challenges to correct the unfair denials, can and will touch our friends, our loved ones and ourselves, regardless of how we voted. In fact, the entire spectrum of political beliefs may collapse under the onslaught of algorithms that cannot be questioned though they are certainly flawed, and that do nothing but make the government seem right, regardless of what happens to its citizenry. A government that cannot be challenged, let alone corrected, is the very definition of tyranny, and if the United States becomes that tyranny, our fights for civil rights and legal remedies as members of the ACLU or CPAC will seem like quaint old-time habits, as relevant as the carriage houses and horse stables in our back yards.

Posted by Alfred Cowger

Defamation Standards in the Age of Algorithms—Should A New Balancing Test be Created?

Like other legal issues surrounding speech on social media, speech that is defamatory is literally moving faster than the legal principles that are meant to address it. On one hand, legislatures long ago enacted statutes of limitations that run from the date of publication of the defamatory statement. Those statutes limit how long a defamer can be liable for one defamatory “publication”, so the punishment does not unfairly outweigh the act. On the other hand, defamers determined to cause repeated disparagement would be liable for “re-publication” of their original statements. This concept of re-publication protects the defamed person from being harmed again and again by repetition of the original defamation. These two concepts worked well together, both protecting the defamed party from repeated repetition of the defaming statement and shielding the defamer from interminable penalties long after the statement was made.
The problem is that when defamation occurs on social media, the person found to have committed the legal sin of defamation can be ruined by one bad statement, since that statement can be repeated far beyond the defamer’s intended audience and can literally last forever on the internet. If the continued existence of the defaming statement were continually to trigger a penalty against the guilty defamer, the penalty to the defaming party could far outweigh the harm that was intended. That, in turn, can have a chilling effect on statements that may have some valuable intent, ranging from criticism of public officials to consumer complaints against private businesses. If any defamation will live forever in the Age of Algorithms, and thus so will the penalties, will anyone risk saying anything negative about anyone or any entity?
On the other hand, defamation in the Age of Algorithms does not eventually fade away like the newsprint of yesteryear. A defamatory statement can always be found via search engines, and can be passed literally around the globe by thousands or millions of social media users. Thus, the defamed party could be harmed literally forever and worldwide.
One of the first court decisions to address this issue is Penrose Hill, Limited v. Mabray, Case No. 20-cv-01169-DMR (N.D. Cal. Aug. 18, 2020). The case involved a lawsuit by a winery owner against a wine blogger. The court found that a posting that is not removed from a social media site is “published” when it is first posted, and that the mere act of keeping the post up is not republication. Thus, the statute of limitations begins running on the date of the first posting, and will expire even if the post is still up when the limitations period runs out.
The court then found that merely referencing that posting in a later tweet is not re-publication. The court based this decision on traditional cases in which publications containing defamatory statements were merely cited in later publications. The court noted in passing that, traditionally, a defamatory statement can be deemed re-published if the original statement is cited with the intention of bringing it to the attention of a new audience. This passing comment deserved more serious consideration. People post tweets hoping that each tweet will reach a new audience, including the hope that the new post will be re-tweeted even more broadly than the first. Thus, one could argue that a re-tweet of an original defamatory statement should be presumed to be an effort to reach a new audience.
The court went on to find that traditional defamation law does not treat re-posting of the same tweet as republication. Therefore, a verbatim reposting of the same statements by the blogger did not trigger a new statute of limitations. The problem, as demonstrated by recent history, is that the best way to spread lies and defamatory speech is to repeat them again and again, hoping they will be re-posted so many times that people begin to believe the lies simply because they have seen them so often. That suggests that if modern defamation law is to respond to hatemongers intent on harming individuals with lies, verbatim re-postings should be proscribed just as the original posting is. That, in turn, means the statute of limitations would not begin running until the hatemonger stops repeating the lies.
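The stakes of the two rules can be reduced to a toy calculation. The Python sketch below uses invented dates and an assumed one-year limitations period; it illustrates only the competing rules, not any jurisdiction’s actual law.

```python
# A toy comparison of the single-publication rule applied in Penrose Hill with
# a republication rule. The dates and the one-year period are invented.
from datetime import date, timedelta

LIMITATIONS_PERIOD = timedelta(days=365)  # assumed one-year period, for illustration

def barred_single_publication(first_post: date, suit_filed: date) -> bool:
    # The clock runs once, from the first posting, even if the post stays online.
    return suit_filed > first_post + LIMITATIONS_PERIOD

def barred_republication(postings: list, suit_filed: date) -> bool:
    # The clock restarts with each verbatim re-posting, so it runs from the latest one.
    return suit_filed > max(postings) + LIMITATIONS_PERIOD

postings = [date(2019, 1, 1), date(2020, 6, 1)]  # original post, then a verbatim repost
suit = date(2020, 9, 1)

print(barred_single_publication(postings[0], suit))  # True: the claim is time-barred
print(barred_republication(postings, suit))          # False: the repost revived the claim
```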
These concerns merely begin the discussion about defamation law in the Age of Algorithms. What is obvious is that courts should not rely on common law that arose when defamatory publications were limited by Industrial Age logistics to finite populations and locales, and rarely lasted a decade before they crumbled to dust.

Posted by Alfred Cowger

Snow Days Ain’t What They Used to Be

My daughter was thrilled by the foot of snow we received on Monday night, because she was sure it meant a Snow Day. She was horrified when her school announced that Tuesday would simply be a day of remote learning, like much of the last several months. That has resulted in an argument over whether Snow Days are themselves Acts of God, because they are, after all, caused by one, and thus students have a divine right to the day off from school. I assert, because after all I am a Daa-aad, that Snow Days are an archaic relic of pre-internet schooling, and thus serve no purpose when students and staff alike can readily shift to remote learning until the streets are cleared. Are Snow Days yet another part of Ohio life that will be forever altered by Covid?

Posted by Alfred Cowger

What Happens When the U.S. Marketplace of Ideas Is Not Even Located in the U.S.?

Lost in the debate about social media as the “marketplace of ideas” for the Age of Algorithms is the physicality of social media, or more specifically the lack thereof. The difference between the physically located marketplace of ideas of the first 250 years of the United States and the placeless marketplace of ideas hereafter will have significant ramifications for the First Amendment that few are discussing now.
There is no doubt that social media is the soap box of the 21st century; the Supreme Court has already concurred in this obvious argument. Yet the comparison is weak in one crucial respect. The platform of the speech-giver of old was completely physical, whether that platform was a stage or a park corner. One essential element of free speech rights was the ability to gain access to a public venue without losing that right just because of what one wanted to say. At the very least, the speech-giver and the government official intent on preventing the speech both knew where the speech would occur.
The Age of Algorithms has changed that completely. Social media is not dependent on a location, let alone an advantageous one like a popular park corner, and so those exercising their free speech rights can do so from literally anywhere in the world. That makes government action against unprotected speech a difficult thing. If Twitter were to decide tomorrow to move its social media network to a Nigerian server (just for example), its U.S. users would not see any difference on their computer screens. Yet any U.S. government regulation of alleged hate speech, or even of obvious crimes like sex trafficking, might not reach outside U.S. borders. So the government’s ability to respond to the negative effects of communication is becoming ever more limited.
Of course, the U.S. government could block Twitter from U.S. computers, much as China blocks Google from the computers of its citizens. That would not put an end to Twitter itself, particularly if the sovereign where Twitter had relocated decided to be sympathetic to Twitter, and to its billions of dollars of revenue.
Perhaps more importantly, the moment the U.S. government decides to block a social media site due to its content, that is content-based censorship of speech. That, in turn, is almost never permitted under the First Amendment. So the U.S. government’s only tool to respond to harmful social media would, in almost all cases, actually result in the protection of that harmful social media.
On the other hand, consider what happens if the U.S. government tries to take steps to prevent censorship by social media’s owners. Those owners could simply move their servers to a friendlier jurisdiction, meaning one that values tax revenues and the employment of its citizens over free speech in the United States.
This is yet another reason that the United States must come up with an alternative to social media, at least in its present form, as the marketplace of ideas for the Age of Algorithms. There is so much that can and will go wrong if social media is the primary means by which ideas and discourse are disseminated. Ultimately, both the individual who wants to speak but is kicked offline and the government trying to prevent illegal communications via social media will be frustrated in their efforts, and the only winner will be the social media owner, generating revenues from a completely unregulated location.

Posted by Alfred Cowger

Free Speech Should Not be Subject to Private Regulation

Before we rejoice too much that the treasonous and hate-filled messages of Trump and the Proud Boys have been silenced, consider the implications. The Supreme Court has already recognized that social media platforms are the 21st-century equivalent of soap boxes at the corner of public parks. Without access to these platforms, it is almost impossible to communicate one’s ideas and discuss them with like-minded people in this new Age of Algorithms. That means our freedom of speech rights are in the hands of private entities, and thus our speech rights have no judicial protection.

To make matters worse, the tools used by these private entities to censor speech are a combination of algorithms and human decisions. Those algorithms have already proven woefully inadequate to stop the spread of falsehoods, while at the same time blocking some posts arbitrarily and unequally. Many innocent posts containing certain words or phrases have sent people to “Facebook hell” at the hands of algorithms. The social media owners admit that the algorithms will make mistakes, but during the Covid pandemic, the humans who might veto algorithmic censoring decisions have not been in the office to catch those errors.
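How crude such filters can be is easy to sketch. The block list below is invented, and real platform moderation is far more elaborate, but the underlying weakness is the same: substring matching cannot tell a banned word from an innocent word or place name that happens to contain it.

```python
# A naive keyword filter of the sort that produces arbitrary blocking. The
# block list is hypothetical; real moderation systems are far more complex.
BLOCKED_TERMS = ["sex", "shoot"]

def is_blocked(post: str) -> bool:
    # Substring matching cannot distinguish a banned word from an innocent
    # word, surname, or place name that happens to contain it.
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

innocent_posts = [
    "The Middlesex County fair opens Friday!",    # place name contains "sex"
    "Five tips to shoot better vacation photos",  # photography, not violence
]
for post in innocent_posts:
    print(is_blocked(post), "->", post)  # both print True: false positives
```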
Even when those humans are present, there is no guarantee that the censoring will be objective. After all, the decision-makers are human, and at times they will simply adopt the “I know it when I see it” approach to meting out punishment for “bad speech”. Moreover, both the humans who design the algorithms and the humans who review the algorithms’ censorship decisions can have their own biases for or against certain posts. “Boys will be boys…” is still a standard by which offensive postings are evaluated.

Our First Amendment rights deserve more protection than what private enterprise promises, let alone delivers.

Posted by Alfred Cowger