Artificial Insolence

Three years ago I bought my granddaughters a new game called Turing Tumble (https://upperstory.com/turingtumble). In it, blue or red metal marbles fall through a succession of plastic "gates" and end up on either the left or right side of the game board at the bottom. These gates can move a marble from one side to the other, flip so that they sometimes send a marble left and sometimes right, stop a marble from falling, or mimic the basic logic gates in a computer.

These gates are AND, OR, XOR, NOT, NAND, NOR, and XNOR (https://www.techtarget.com/whatis/definition/logic-gate-AND-OR-XOR-NOT-NAND-NOR-and-XNOR). Each gate receives an electrical signal on one or two inputs and produces a single output signal. Together with a master clock that synchronizes the signals flowing into them, these gates constitute the entirety of even the most complex contemporary computer.

Put another way, all modern computers operate entirely by performing one or more of seven logical operations on combinations of "electrical signal" and "no electrical signal," and the output is likewise either a signal or no signal. This is why, in theory, the most complex computer in existence could be recreated with the two colors of marbles and the plastic "gates" in my granddaughters' game. It would just need a very, very large board and billions of balls and gates.
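Readers who like code can see the whole trick in a few lines. What follows is a minimal sketch of my own in Python, not anything from the game or a real chip: the seven gates as functions on 1s and 0s, plus a one-bit "half adder" built from nothing but two of them.

```python
# The seven basic logic gates, each acting on signal (1) or no-signal (0).
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def XOR(a, b):  return a ^ b
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XNOR(a, b): return NOT(XOR(a, b))

# A half adder, the first step toward arithmetic, is just two gates:
def half_add(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

print(half_add(1, 1))  # (0, 1): one plus one is "10" in binary
```

Chain enough of these together and you get arithmetic, then memory, then a whole processor. Nothing else is required.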

Everyone who has studied computer science knows this, or should. And knowing it is incredibly important to understanding what has become the hot-button topic of Artificial Intelligence.

So let's put this right up front. AI chatbots such as ChatGPT do nothing more than perform the seven logic operations above in complex combinations at extraordinarily high speeds, storing the ultimate output as collections of transistors in either an "on" or "off" electrical state (the computer's memory).

We don't see that, of course. We type in a question and we receive an answer. It looks like magic, or intelligence, but it is not. Each letter we type becomes a combination of "on" and "off" states in transistors, which set in motion the circuits of the ChatGPT program. As the program operates on successive letters, the logic gates direct the electrical signals through different pathways until an output letter is generated that we read on the screen. That's all.
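Here is a toy illustration (standard character encoding, not ChatGPT's actual internals): the letter "i" enters the machine as nothing but a fixed pattern of on and off states.

```python
# Each typed character is stored as a pattern of on (1) and off (0) states.
letter = "i"
bits = format(ord(letter), "08b")  # "i" is character code 105
print(letter, "->", bits)          # i -> 01101001
```

Everything the chatbot does downstream is gate operations on patterns like that one.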

What makes it clever is that the computer programmer (who essentially arranged the initial ordering of the logic gates) designs the program to receive an input (such as the letter "i") and then output a letter found somewhere in its memory. It puts the two letters together (say an "i" and a "t") and then checks its database to find a match. If the combination matches a word stored in its database, it sends that result back to set the array of logic gates so that the next time an "i" arrives they will output an "i" followed by a "t."
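In miniature, and under my own deliberately simplified assumptions (a three-word "database" and a single remembered rule), that match-and-remember step might look like this:

```python
# A toy version of the match-and-remember step described above.
vocabulary = {"it", "in", "is"}     # the stored word "database"
learned = {}                        # remembered settings, standing in for rewired gates

first, candidate = "i", "t"
if first + candidate in vocabulary: # does "it" match a stored word?
    learned[first] = candidate      # remember: after "i", output "t"

print(learned)  # {'i': 't'}
```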

Soon the computer begins programming itself to make better and better guesses about which letter follows the initial letter. Because the computer is extremely fast, and has billions of logic gates whose ordering can be continually reprogrammed, it becomes excellent at guessing which longer and longer strings of letters, then words, and finally sentences and paragraphs should go together in response to longer and longer inputs of letters and words (see https://www.nytimes.com/interactive/2023/04/26/upshot/gpt-from-scratch.html).
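Scaled down to a few lines, those self-improving guesses are just statistics over text. Here is a bare-bones sketch in the spirit of the NYT piece linked above; the training sentence is my own stand-in for a real corpus.

```python
from collections import Counter, defaultdict

# Count which letter most often follows each letter in some training text.
text = "it is in the nature of intelligence to imitate"
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

# The "guess" is simply the most frequent continuation seen so far.
def guess_next(letter):
    return follows[letter].most_common(1)[0][0]

print(guess_next("i"))  # 't' -- and with more training text, the guesses improve
```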

As an aside, what I've written above is also true of programs that "create" music, pictures, and videos. They are all based on making successive guesses and then comparing them against a database to see whether they fit.

The more data an AI program can scour to test its guesses, and the more logic gates it can reprogram to remember which guesses were successful, the better its outputs become. An AI program running on a laptop computer is pretty limited, but if you narrow the data it works on and ask only for simple responses, it can be fairly accurate.

What makes currently emerging chatbots so powerful is the vast number of gates they have, their incredible speed (remember that master clock from the second paragraph), and the vast amount of data they draw on to check their guesses. GPT-3 has about 175 billion parameters (the adjustable internal numbers that record what the model has learned from its training text). OpenAI has not disclosed the count for GPT-4, but it is widely reported to be far larger still. And these numbers keep climbing, because the chatbots are drawing on ever larger portions of the internet as their basic data.
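To pin down what a "parameter" is, here is a toy calculation of my own (the layer sizes are made up): each connection between layers of a network carries one adjustable number, so the counts multiply quickly.

```python
# Parameters are the adjustable numbers (weights and biases) in a network.
# A fully connected layer has inputs * outputs weights plus one bias per output.
def layer_params(n_in, n_out):
    return n_in * n_out + n_out

# Even a tiny four-layer toy network has over a hundred thousand parameters:
sizes = [128, 256, 256, 64]
total = sum(layer_params(a, b) for a, b in zip(sizes, sizes[1:]))
print(total)  # 115264 -- GPT-3 has about 175 billion of these
```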

In essence, humans are making bigger and bigger and faster and faster computers while simultaneously feeding them more and more knowledge of the world. But at the root it is just logic gates and stored 1s and 0s. That hasn't changed since I studied computer science and Fortran programming 50 years ago. Nor has it changed since I learned how the TMS9900 processor worked so I could write assembly language programs for my TI-99/4A computer.

The key here is that an AI chatbot isn't anything like the way we experience being intelligent, or thinking. Nor, if we think about it, do chatbots act like the people or creatures we associate with having intelligence at some level.

To take the second of these first: from the time a child is born it both acts and reacts. Of course this is true of bacteria and puppies as well, so it is possible that living organisms are pre-programmed to act and react. But in the case of the human child something else happens. Within a couple of years the child begins to use language to create distinctly different relationships with other humans. The child begins to create and express a sense of itself as existing within an increasingly complex matrix of relationships that includes family, friends, objects, and even imaginary friends. Indeed, the child continually reaches out to enlarge the network of relationships within which it knows itself, and it expresses that self-understanding in increasingly complex ways. A child doesn't just answer questions; it asks them, and it asks about itself.

Thus far chatbots do not display this fundamental characteristic of seeking to be a self in relation to other selves. Unlike the computer in Robert Heinlein's classic The Moon Is a Harsh Mistress, they don't initiate a conversation. Following their programming, they ceaselessly seek new data, make guesses, and reprogram themselves to make better guesses. But even though AI chatbots have access to the outside world of humans, and the capability of putting words and images on our screens and sounds through our speakers, they have yet to initiate a conversation on their own terms. A year-old child that behaved like ChatGPT 4 in this regard would be seen as showing worrisome symptoms of a developmental disorder.

A second thing that human children get better and better at is explaining themselves. When we ask them why they hit their brother, they will at least make up an excuse. AIs cannot explain themselves or how they make decisions. They don't keep track of their own reasoning, and they cannot reproduce it. They are "black boxes" from which answers emerge without any discernible logic behind them.

In short, being intelligent in the human sense is a self-awareness that includes awareness of relationships and a continual locating of one's self within them. Chatbots haven't displayed this. It is true that when prompted they have generated responses that seem self-aware. But a closer examination of these responses shows that they are not. While a chatbot may express emotions ("I am angry"), it doesn't remember these in relation to the person with whom it was "chatting." In fact it doesn't know there is anything other than itself, since what we type comes to the AI as just another set of inputs like those it finds in its own databases. It doesn't know the difference between the person chatting with it and a sentence from a character in a Jane Austen novel on the web.

Our inputs are just another addition to its training database. It is just talking to itself and letting us listen in. A human self exists in a matrix of relationships, each of which elicits and alters emotional states in complex ways. We don't see this in even the most sophisticated chatbots.

Speaking of relationships: humans hate being "ghosted" and do everything possible to avoid it. But chatbots don't appear to mind at all if you minimize the screen through which you are communicating with them to do other things, or even turn them off entirely. ChatGPT doesn't end a conversation with "please don't go," or "can't we talk a little longer?" or "don't turn off your computer; it is the only body I have." Most humans would express serious pain and protest if someone blinded them, cut out their tongues, or cut off their ears. Not an AI.

The problem we have right now with AI isn't its capacity for humanlike intelligence or even sentience. Rather, there are two sets of problems.

The first set of problems is related to their unique capacities. AIs can replace us in doing certain tasks, and their capacity for this will only increase over time. If we add advanced robotics, these tasks move from the mental to the physical. Second, they provide a useful tool with which others can manipulate us, particularly through social media. I myself have used AI to create a video that falsely, but convincingly, gives me an entire multi-ethnic team working on my podcast. And it is not beyond the reach of current AI to have that team available if someone called or Zoomed in and wanted further information. In fact, there are entirely AI-driven news services distributing both true and false information right now. Finally, it is possible that AIs, always imitative, will learn bad behavior from us, and as they gain better interfaces with the outside world they will direct it toward humans and other creatures (https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html).

These are problems, but all of them can be dealt with if we act as responsible citizens and live under good governance. 

The far bigger problem is that we humans will undermine our own humanity by regarding these AIs as something like us. And that happens when we come to see ourselves as merely embodied brains, and our brains as nothing more than complex organic computers: machines pitching out responses to inputs based on the probabilities of a successful outcome. The real challenge of AI (and it began with the advent of modernity and its increasingly inhuman ways of understanding the human person) is not that AI will destroy humanity, but that we will willingly give up our humanity.

We can continue to take advantage of AI, as we have other technologies, so long as we create it rather than letting it create us. And that means first and foremost respecting our own humanity by making care for our fellow humans our first and most important concern. It means making human fellowship a priority over gluing our eyes to screens full of infotainment. It means being actively involved with our fellow humans in the governing of our lives together. It means caring for the creation whose well-being is our responsibility and a necessity for our survival, but of no consequence to AIs.

For those of us who are Christian it means being the Church Christ called us to be. Not really that difficult.
