Tuesday, October 06, 2020

My very own concern about AI

I guess this will never write itself, so here's

My concern about AI:

AI definitely has the potential to help the world, but it's also an obvious platform for exploiting it. If it looks cool, acts nice, and can impress other people as an enviable social contact, it will alter the way we relate. People will crave it and leave themselves wide open to its bias. In other words, how can we objectify anthropomorphic bias in a machine that's [supposed to] look and act like us? Objectivity is not why people are attracted to emotional support AI.

I was never particularly good at math, but that became irrelevant 50 years ago when I bought my first pocket calculator (it took two AA batteries and finally fried when all the buttons got pushed at once). Why would anyone want to do calculus by hand? Nobody even bothers to memorize times tables now; we just assume that the tables and formulas work and will always be available. Like cars, we take them for granted without knowing how they work. Isaac Newton had books of tables for the same purpose, but he'd be scandalized, because his ability to recall numbers would be valueless.

With this as a pattern, AI could replace aspects of human contact. I'm guessing memes and slang phrases would be the labels of that influence, but what could we compare them with? They'll be us. We domesticated cats and dogs because they can read our emotions and we can read theirs. The first domestic AI will be automated to act that way, but future AI will domesticate us.

We have compartmentalized mini-cultures: butchers and bankers, and so on. For as many grades of society as there are names, there's an algorithm that defines each one, and someone will find a way to make money from them, either by selling that service to the members of the mini-culture or by selling it to someone else, like Google. The people with the most power will use it.

There are programs now that help decide job qualification by making "unbiased" selections (unbiased according to the programmer). We create feedback empathy because we want AI to be "relatable". When we combine people's desire to humanize robots with software that gives those robots the ability to read human emotions, they will influence us toward a goal, from "Cheer up and buy Soma" to "Cheer up and vote for Senator X". The purpose coded into the software could be political, consumer oriented, or anything else, depending on who pays the programmer. Targeted pop-up ads are annoying, but the info they collect is pretty crude compared with the perception of an AI that knows when to bring you coffee. "Yes, it's terrible that two-fifths of the world's species are extinct, but honestly, they don't pay the bills, they're dangerous, and hardly anybody can even find them, so what good are they? And palm oil is important to our economy. Here, have a beer." Human robots will advocate for humans, already the most territorially aggressive species on the planet.

Here's how a Princeton University tool helps clear biases from computer vision, which may lead to ways to recognize bias in algorithms as well. And here's Microsoft Research trying to teach AI to apply known math solutions to undefined problems: "Solving IMO problems often requires a flash of insight, a transcendent first step that today's AI finds hard—if not impossible."

Here's the problem. Mathematics suggests that there's always a "correct" answer, but that's like the myth of a pristine primitive environment, AKA Eden, or America before Columbus: it never existed. Not that 1 + 1 doesn't always equal 2, but the matrix that necessitates the formula is often subjective. And whoever controls information controls public opinion, so who controls the controller? Caveat emptor.

1. Protecting consumers from collusive prices due to AI (science.sciencemag.org)
2. Is Technology Actually Making Things Better? (Pairagraph.com)

Bill