Steven D Marlow
11 min read · Nov 7, 2021


Bite My Shiny Metal Ass

Have you ever seen a link to an article and just known it was going to give you an ulcer? Yes, I know: why read it at all, knowing it’s going to bother you? That’s just the nature of using Twitter as a kind of trade journal.

What follows is a long-form live-tweet thread, done as the draft of an article in reply to one such post. I guess that just makes it a reply where I’ve shared TMI about the drafting process. Link to the original post by TIME here.

There are two related threads you find in AI circles. One is about being an expert, and what that actually means. The other is about access, and what kind of barriers to entry people encounter. These, of course, overlap. While anyone not working directly on AI research or ML projects (and I intentionally make a distinction between the two) might bring something valuable to the table, their being branded by the media as an AI expert tends to ruffle feathers. This leads to some side-eye that gets interpreted as gatekeeping, and the toxic downward spiral begins.

I’m not an academic. I’m not even a minor player in the ML space. And since no one has created a thinking machine, no one is really entitled to the crown of “expert” when it comes to the longstanding goal of AI. But, if you will permit me, when it comes to building that understanding, I am at least an expert with respect to one model or framework, even if it’s at the edges of conventional thinking.

Henry Kissinger is not. I’m not being ageist, and I’m not saying people can’t learn new things. What I am strongly suggesting is that many already-famous people get a free pass for the mere association with whatever topic is important for the day. I doubt his credibility on the subject matter. To highlight what understanding of the subject matter does look like, I’d point to this truly impressive interview by William Shatner (How far will AI go?).

As the article starts out, we learn that Kissinger’s “intro” to the field of AI came only after he was invited to attend a lecture on the topic in 2016. The post is based on an interview tied to a just-published book in which he is given “top billing”. And since I’ve already shown my ire toward the article, you’ll understand why I’m not including a direct link to the book.

Things don’t improve when he states that “the technological miracle doesn’t fascinate me so much” when asked about his interest in AI. For added context: also part of the interview was former Google CEO Eric Schmidt (the same person who had invited him to said lecture a few years earlier). This highlights a trend of “tech leaders” falling into some kind of born-again attitude toward society and the potential harms of their own making.

From an unscientific glance at the people within the AI community who are active in public discourse, a rather large number bring a philosophical perspective to AI and technology. In fact, the biggest issue outside of research (and dubious product development) is the social impact, framed as both an ethical issue and a legal one. So, again, for Kissinger to say there is no philosophy or dominant philosophical view in which to keep this new technology in check shows he lacks even a surface-level understanding of the subject matter.

I can’t tell if Kissinger is saying the quiet part out loud or if he just lacks situational awareness, but he unknowingly calls out the way he is being manipulated. His open criticism of one company controlling the flow of data (and, by extension, truth) isn’t a direct condemnation of Google, but even Schmidt’s efforts to at least deflect criticism, if not directly convert Kissinger into a true believer in the larger mission, weren’t enough to keep him from pointing to the dangers of such concentrated power. *Note that the focus is on private companies; his view on government having the same data-collection strategy, or media that shape the “news” to favor one political ideology over another, doesn’t seem to be an issue. **Nah, he flips it around on them later in the interview.

I’m critical of “AI” getting a bad rap for the things “ML” has done, because I see them as being diametrically opposed (a kind of rivalry that goes back to the earliest days of AI). While “AI” is the brand name with the clout and recognition required to sell a book, the real sleight of hand is the suggestion that technologists (some nebulous, un-branded, totally-not-connected-to-Google group of people) are the real culprits we should be looking at. One only needs to look at the creation of ethics groups within the leading tech giants to see how this “it’s not us, it’s them” idea is meant to play out.

There is a question about the downside of technology companies focusing on valuation and profit over “social welfare”, using technology to directly manipulate the masses rather than being productive members of society. Kissinger, rather than demonstrating an understanding of said technology, simply loops back to his own view of this new age of human consciousness: a new form of enlightenment where technology can give us answers without having to show its work. However, claims of social advancement need to be taken with a handful of salt, as the flag-planting headlines (which have a positive impact on stock value) usually don’t hold up well a year or so later. That someone so ill-informed is being given a platform on which to “inform” the general public is a whisper network done at scale (a signature of Silicon Valley).

I always roll my eyes when some political slight is inserted into a book, lecture, or interview. So here is me doing the same. Feel free to tweet your reaction. Kissinger, when asked about AI and geopolitics, does a Biden: “Like every artificial intelligence, they are more effective at what you plan. But they might be also effective in what they think their objective is.” We get what he is trying to say, but unpacking it at face value is tough for humans; multiple external references are required. Good training for AIs, I guess.

Ships that can navigate the open ocean on their own, and jets that can autonomously function as “loyal wingmen”, are still in the early days of research, and timescales for effectiveness depend on which camp you are in. For Team ML, this is a natural progression from other forms of autonomous driving and flying, so it’s a mostly solved problem. For Team AI, none of these systems has the actual understanding needed to be effective and trustworthy on its own.

Human-in-the-loop requirements and calls to ban “killer robots” are more of that “missing philosophy” he spoke of, so there is no clear path toward actual deployment of such aircraft at a scale that would lead to some kind of AI-vs-AI conflict between the US and China in the near future. It’s really just another bogeyman to hold public interest and be the wind that blows politicians in the direction Big Tech wants.

I need to throw some shade at TIME for not addressing this in some way, because another of his quotes went unclarified: “The Deep Think computer was taught to play chess by playing against itself for four hours. And it played a game of chess no human being had ever seen before. Our best computers only beat it occasionally.” You ever have a shirt or pair of pants that’s just so dirty you throw it away rather than try to clean it? There is no cleaning that last quote. You can say he was referring to Deep Blue, but there is no asterisk in the world big enough to fix the rest of it.

We’ve clearly established that Kissinger on AI is like a 4th grader on quantum mechanics, yet this isn’t even the halfway point of the original article.

Schmidt replies to a question about the speed of progress and tries to use information overload as an example of humans already losing some capacity to think for themselves. As he states that people are addicted to the flow of information, I can almost see the interviewer looking toward the reader in a kind of fourth-wall break. Not the most arrogant of replies, but his connection to the book and being interviewed alongside Kissinger smacks of “running cover” for Google in the face of unstoppable accusations. He also deflects on the issues of who controls the AIs that provide much-needed assistance, how bias is managed, what regulations around AI are going to look like, etc.

Now, he is the one to bring up this idea that humans are burdened by having too much access to information, and suggests that AI can make life easier for us. Big Tech has already positioned itself as both moderator of and access point for data and information, even when it’s your own data! Big Tech has also turned “screen time” into an addiction. Great news that Big Tech now wants to create a digital monkey to sit on your shoulder and free you from the heavy chore known as agency.

The historical dual use of technology is glossed over in an effort to focus more on the potential downsides of AI, and we get to the beneficial-AI stuff (without any reference to existing efforts). Kissinger again returns to calling for a new kind of philosophy to guide research and, I guess, by extension, researchers. In the academic world, the move to include more humanities in the technical education pipeline has been underway for a few years. AI Ethics has been the de facto “philosophy of control” going back a few more years than that. The fruits of that work have moved into the actual legislative process, at both the national level and within the European Union.

It would be a tangent to describe my frustration with that whole process, as there was never anything “actionable” for people doing research. A guide or checklist was never enough, and most suggestions involved having to be an ethicist first, or at least being deeply immersed in ethical literature (and as a “tech bro”, your input was always rejected). It’s also the case that “AI” is the bad guy, yet all references to harms, dangers, or violations of rights stem from the use or training of ML systems. The result is a formulation of laws that don’t target the core issues directly (as if Big Tech played a not-so-small role in making sure no one bit the hand that fed them), but may be harmful to research that breaks from current trends. Hmm, Big Tech trying to prevent the creation of disruptive technology not under their control?

Schmidt echoes the idea of a philosophical framework to examine what limits, if any, should be placed on emerging AI tech. Saying that Big Tech shouldn’t try to do this alone, and that scientists and policymakers need to be involved, plays well to those not already jaded by the knowledge that Big Tech has “access” to anyone else who might be selected to participate, and thus can ensure Big Tech remains the guiding hand. Meanwhile, actual philosophical debate continues to go unsupported and under-appreciated.

On the question of international efforts, Schmidt refers to the people who have been thinking about these issues as “relatively elite groups”, and I’m glad my desk is in an unflippable state. What an asshole. Given that the article doesn’t reproduce every reply, I can imagine a part was removed that clarified the idea by saying more diversity was needed, but from the context, he really implied that “the best people” are already working on this and it’s only a matter of bringing them together to share notes.

Kissinger and Schmidt then get into the idea of AI operating faster under combat conditions than human operators can keep up with, and how that needs to be an area of regulation. It’s also another failure to acknowledge existing debate (as if what they say in the book is meant to lead the community in a new direction, rather than meeting the more likely “nothing new” reception).

Schmidt is actually called out for Google’s lead in being the very thing he and Kissinger seem to be warning about, and while he quickly admits “guilt”, he is just as quick to bring out the ‘so did many other people’ defense. And that’s it. A softball of a criticism, and the interview continues as before.

There is an interesting question about trying to be ahead of the technology vs. always having to deal with it after it arrives. This was an issue faced by AI Ethics, where all the obvious challenges were post-harm; there was never an example of “ethical guidelines” that would prevent a theoretical outcome. Schmidt’s answer is one of those political insertions, where he’s all upset about the internet being used by governments to interfere with an election, or the spread of anti-vax material. Zero questions about YouTube (Google’s “sister” company) showing favoritism toward one political party over another for years, or about Big Tech being the one creating all of the tools and then being shocked when ‘socially unfavorables’ are shown to have used them. We’re talking skyscraper levels of hypocrisy.

*I like the way Schmidt says he could have lobbied in a different way when Kissinger asks what he would have done if he had known the outcome ahead of time. And his “solution” to fighting misinformation is a ranking system for information sources, which is a totally new concept for Google. /sarcasm

In an interview about their new book, which is (probably not) focused on AI technology, he delivers this gem of a quote: “We can typically predict technology pretty accurately within a 10-year horizon, certainly a five-year horizon.” I picked the wrong article to write a formal reply to, because I’d rather not give it any more of my time. Just when you think these two can’t sound any less informed… For Kissinger, it’s just not worth it to study AI, because he clearly has no real interest in it. For Schmidt, former CEO of Google, it HAS to be theater. His answers are the kind you would expect from Congressional testimony. His actual knowledge of the subject, if you want to play backseat cynic, is likely tied up with the kind of plans you don’t dare inform the general public about (for legal and political reasons).

Kissinger ironically walks right past the central theme of the book (that AI is a rapidly changing technology with an unknown impact on society) when he mentions that no one could have foreseen the internet’s impact on politics. In those obscure philosophical circles of non-elite thinkers, the impact of AI and automation on our lives, our actions, and even what we know of the world is given serious debate.

Speaking of the book, Schmidt says the goal was to lay out all of the problems so that those elite working groups could come up with solutions. I doubt the book mentions even one thing that hasn’t already been brought to everyone’s attention, and that list is getting a bit long in the tooth (like, “that old example” levels of repetition).

I’ll end this with something from Schmidt’s final answer: “I grew up in the tech industry, which is a simplified version of humanity. We’ve gotten rid of all the pesky hard problems, right?” Well, there it is: the mindset of those trying to advance technology on your behalf (at least, that’s what they say publicly).

*The image used is from the Futurama episode “War Is the H-Word”, which not only includes (the head of) Kissinger but is also about the phrase “bite my shiny metal ass” being Bender’s most-uttered line.
