Algorithms are everywhere. Here’s why you should worry

An algorithm is a set of rules or steps, often followed by a computer, to produce a result. And algorithms are not just on our phones: they are used in all kinds of processes, on- and offline, from helping to value your home to teaching your robotic vacuum cleaner to steer clear of your dog’s feces. Over the years, they have increasingly been entrusted with life-changing decisions, such as helping decide who should be arrested, who should be released from prison before a trial date, and who should be approved for a home loan.
In recent weeks, there has been renewed scrutiny of algorithms, including how technology companies may need to change the way they use them. This stems both from concerns raised in hearings featuring Facebook whistleblower Frances Haugen and from bipartisan legislation introduced in the House (a companion bill had previously been reintroduced in the Senate). The legislation would force large technology companies to give users access to a version of their platforms where what they see is not shaped by algorithms. These developments highlight the growing awareness of the central role algorithms play in our society.

“At this point, they are responsible for making decisions about pretty much every aspect of our lives,” said Chris Gilliard, a visiting researcher at Harvard Kennedy School’s Shorenstein Center for Media, Politics, and Public Policy.

Yet the way algorithms work and the conclusions they reach can be mysterious, especially as the use of artificial-intelligence techniques makes them increasingly complex. Their results are not always understood, or accurate – and the consequences can be catastrophic. And the effect of potential new legislation aimed at limiting algorithms’ influence on our lives is still unclear.

Algorithms, explained

At its most basic, an algorithm is a series of instructions. As Sasha Luccioni, a researcher on the AI ethics team at AI model builder Hugging Face, pointed out, an algorithm can be hard-coded, with fixed instructions for a computer to follow, such as putting a list of names in alphabetical order. Simple algorithms have been used for computer-based decision-making for decades.
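To make that concrete, here is a minimal Python sketch of the kind of fixed-instruction algorithm Luccioni describes: alphabetizing a list of names with a hand-written selection sort. The names are invented for illustration.

```python
# A hand-written, fixed-instruction algorithm: alphabetize a list of names.
# The names here are invented purely for illustration.
names = ["Dana", "Alex", "Casey", "Blake"]

def alphabetize(items):
    """Selection sort: repeatedly move the smallest remaining item forward."""
    result = list(items)  # copy, so the original list is left untouched
    for i in range(len(result)):
        smallest = i
        for j in range(i + 1, len(result)):
            if result[j] < result[smallest]:
                smallest = j
        result[i], result[smallest] = result[smallest], result[i]
    return result

print(alphabetize(names))  # ['Alex', 'Blake', 'Casey', 'Dana']
```

Every step is spelled out in advance by a human; nothing about the procedure changes based on the data it sees. That is what separates simple algorithms like this one from the learned models discussed below.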

Today, algorithms help ease otherwise complicated processes all the time, whether we realize it or not. When you tell a clothing site to filter its pajamas so you see the most popular or cheapest options, you are essentially using an algorithm to say, “Hey, Old Navy, go through the steps to show me the cheapest jammies.”
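That kind of request boils down to a filter step followed by a sort step. Here is a toy Python version; the product catalog is made up, and real retail sites run far more elaborate versions of the same idea.

```python
# A toy version of "show me the cheapest pajamas": filter a catalog, then sort.
# The catalog entries are invented for illustration.
catalog = [
    {"name": "Flannel PJ Set",   "category": "pajamas",   "price": 24.99},
    {"name": "Winter Coat",      "category": "outerwear", "price": 89.00},
    {"name": "Cotton PJ Pants",  "category": "pajamas",   "price": 12.50},
    {"name": "Fleece Onesie",    "category": "pajamas",   "price": 19.99},
]

# Step 1: keep only pajamas. Step 2: order them cheapest-first.
pajamas = [item for item in catalog if item["category"] == "pajamas"]
cheapest_first = sorted(pajamas, key=lambda item: item["price"])

for item in cheapest_first:
    print(f"{item['name']}: ${item['price']:.2f}")
```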

All sorts of things can be algorithms, and they are not limited to computers: a recipe, for example, is a kind of algorithm, as is the everyday morning routine you sleepily muddle through before leaving the house.

“We run on our own personal algorithms every day,” said Jevan Hutson, a data protection lawyer at Hintze Law in Seattle who has studied artificial intelligence and surveillance.

But while we can question our own decisions, those made by machines have become more and more enigmatic. That is due to the rise of a form of AI known as deep learning, which is modeled on the way neurons work in the brain and took off about a decade ago.
A deep-learning algorithm might require a computer to watch thousands of videos of cats, for example, to learn to identify what a cat looks like. (It was a big deal when Google figured out how to do this reliably in 2012.) The result of that process of bingeing on data and improving over time is essentially a computer-generated procedure for determining whether there is a cat in any new picture the computer sees. This is often known as a model (though it is sometimes also referred to as an algorithm itself).
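As a rough illustration of that train-then-predict shape, here is a hedged Python sketch using scikit-learn. A real deep-learning system would train a large neural network on millions of actual images; this toy stand-in trains a tiny neural network on invented numeric features (imagine “ear pointiness” and “whisker length”), but the pattern is the same: the learned model, not a human-written rule, decides how to label new examples.

```python
# Toy stand-in for the cat-recognition story: train on labeled examples,
# then let the resulting model classify new ones. The features and data
# are invented for illustration; a real system would learn from raw images.
from sklearn.neural_network import MLPClassifier

# Made-up training data: [ear_pointiness, whisker_length], label 1 = cat.
X_train = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7],   # cats
           [0.1, 0.2], [0.2, 0.1], [0.3, 0.2]]   # not cats
y_train = [1, 1, 1, 0, 0, 0]

# A tiny neural network "binges" on the examples and adjusts itself.
model = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# The trained model -- a computer-generated procedure -- labels new data.
print(model.predict([[0.85, 0.75], [0.15, 0.25]]))  # expected: [1 0]
```

No programmer wrote the rule that separates the two classes; it emerged from the data, which is why the resulting procedure can be hard for even its builders to explain.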

These models can be incredibly complex. Facebook, Instagram and Twitter use them to help personalize users’ feeds based on each person’s interests and past activity. Models can also be built on piles of data collected over many years that no human could possibly sort through. Zillow, for example, has used its trademarked, machine-learning-assisted “Zestimate” to estimate the value of homes since 2006, taking into account tax and property records, homeowner-submitted details such as the addition of a bathroom, and pictures of a house.
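To give a flavor of how such a valuation model works, here is a minimal Python sketch using scikit-learn’s linear regression. The figures are invented and vastly simpler than anything Zillow actually does; the point is only the pattern: fit a model to historical records, then ask it to price a home it has never seen.

```python
# Minimal flavor of a home-valuation model: fit on past sales, price a new home.
# All figures are invented; real systems use far richer data and models.
from sklearn.linear_model import LinearRegression

# Made-up sales records: [square_feet, bedrooms, bathrooms] -> sale price.
X_train = [[1400, 3, 2], [2000, 4, 3], [900, 2, 1], [1700, 3, 2]]
y_train = [310_000, 450_000, 195_000, 360_000]

model = LinearRegression()
model.fit(X_train, y_train)

# Estimate the value of a home the model has never seen.
estimate = model.predict([[1600, 3, 2]])[0]
print(f"Estimated value: ${estimate:,.0f}")
```

The estimate is only as good as the historical data and the assumptions baked into the model, which is exactly where, as the next section describes, things can go wrong.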

The risk of relying on algorithms

However, as Zillow’s case shows, handing decision-making over to algorithmic systems can also go awry in unacceptable ways, and it is not always clear why.

Zillow recently decided to shut down its home-flipping business, Zillow Offers, showing how difficult it is to use AI to value real estate. In February, the company had said its “Zestimate” would represent an initial cash offer to buy a given property through the business; in November, it took a $304 million inventory write-down, which it blamed on having recently purchased homes for prices higher than it thinks it can sell them.

Elsewhere on the web, Meta, the company formerly known as Facebook, has faced scrutiny for adjusting its algorithms in a way that helped incentivize more negative content on the world’s largest social network.

Algorithms have also had life-altering consequences, especially in the hands of police. We know, for example, that at least several Black men have been wrongfully arrested due to the use of facial-recognition systems.

Technology companies often offer little more than a basic explanation of how their algorithmic systems work and what they are used for. What’s more, experts in technology and technology law told CNN Business that even those who build these systems do not always know why they reach their conclusions, which is one reason they are often referred to as “black boxes”.

“Computers, computer scientists, at this stage, they act like wizards to a lot of people because we do not understand what it is they are doing,” Gilliard said. “And we think they’re always right, and that’s not always the case.”

Popping filter bubbles

The United States does not have federal rules for how companies may or may not use algorithms in general, or those that utilize AI in particular. (Some states and cities have adopted their own rules, which tend to address facial recognition software or biometrics more generally.)

But Congress is currently considering legislation called the Filter Bubble Transparency Act, which, if passed, would force large internet companies like Google, Meta, TikTok and others to “allow users to engage with a platform without being manipulated by algorithms driven by user-specific data.”
In a recent piece for CNN Opinion, Republican Senator John Thune described the legislation he co-sponsored as “a bill that would essentially create a light switch for big tech’s secret algorithms – artificial intelligence (AI) designed to shape and manipulate users’ experiences – and give consumers the choice to turn it on or off.”

Facebook, for instance, already offers something like this, although users are effectively discouraged from flipping the so-called switch permanently. A fairly well-hidden “Recent” button will show you posts in reverse chronological order, but your Facebook news feed will revert to its original, heavily curated state when you leave the site or close the app. Meta stopped offering such an option on Instagram, which it also owns, in 2016.
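The “switch” the bill envisions ultimately amounts to choosing between two orderings of the same posts. Here is a hedged Python sketch of that idea; the posts and the engagement-based scoring are invented for illustration and do not reflect how any real platform ranks content.

```python
# Toy illustration of the "filter bubble switch": the same posts, ordered
# either by an invented engagement prediction or in reverse chronological
# order. Neither the data nor the scoring reflects any real platform.
posts = [
    {"text": "Vacation photos",   "timestamp": 100, "predicted_engagement": 0.2},
    {"text": "Outrage-bait rant", "timestamp": 105, "predicted_engagement": 0.9},
    {"text": "Local news story",  "timestamp": 110, "predicted_engagement": 0.5},
]

def algorithmic_feed(posts):
    """Rank by how much the system guesses this user will engage."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

def chronological_feed(posts):
    """Rank newest-first, ignoring user-specific predictions entirely."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

algorithm_on = False  # the hypothetical "light switch"
feed = algorithmic_feed(posts) if algorithm_on else chronological_feed(posts)
print([p["text"] for p in feed])
```

With the switch off, the outrage-bait no longer jumps to the top simply because a model predicts it will hold your attention.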

Hutson noted that while the Filter Bubble Transparency Act clearly focuses on major social platforms, it would inevitably affect others, such as Spotify and Netflix, that depend heavily on algorithmically driven curation. If it passes, he said, it would “fundamentally change” the business model of companies built entirely around algorithmic curation, a feature he suspects many users value in certain contexts.

“This will affect organizations far beyond those who are in the spotlight,” he said.

AI experts argue that more transparency is crucial from companies that make and use algorithms. Luccioni believes algorithmic transparency laws are necessary before specific uses and applications of AI can be regulated.

“I see things changing, certainly, but there’s a really frustrating delay between what AI is capable of and what it’s legislated for,” Luccioni said.

