AI is the new calculator(?) - #6
Did you know that AI has something to say about half-naked Italians? Ok, that sentence is a bit click-bait, but if you read this issue you will understand what is true in it - more than you might imagine.
Hello dear readers! It has been a while, right?!
Here we are again with a very content-rich issue that I promise will make up for the lost time.
Themes like machine learning, natural language processing and large language models are really inflated these days: everyone is talking about them, every day, on every medium. But we have to accept that these technologies will be part of our daily life forever, whether we are computer scientists or not, just as the ‘normal’ calculator became years ago.
In today's technological era, artificial intelligence (AI) has revolutionized the way we live, work and interact with the world. AI has transformed scientific and industrial settings by introducing cutting-edge technology that simulates human intelligence. ChatGPT - a machine learning model that leverages natural language processing techniques to generate human-like responses to text-based queries - has already transformed the jobs (and the lives?) of many people.
Understanding these concepts and their applications, even superficially, is essential to knowing where we are going and how: today’s newsletter explores why it matters to understand how these technologies work and how they will change our lives.
Let’s start!
Something to read
Yes, that is a half-naked Italian guy, and the percentage you see changing is how sexually suggestive, or “racy”, the AI analysis rates the image.
The guy in the image is Gianluca Mauro, who together with Hilke Schellmann wrote a terrific article for the Guardian analyzing the phenomenon: an in-depth piece of work on issues that have become fundamental, ethics and bias in AI. The fact that these kinds of algorithms often end up recreating societal biases
“means that people who tend to be marginalized are even further marginalized – like literally pushed down in a very direct meaning of the term marginalization.”
said Margaret Mitchell, the chief ethics scientist at Hugging Face.
And if you think this is something that does not affect you personally, you are wrong: it will affect what you can publish on social media and how “far” your posts will reach, and it may decide whether your CV gets filtered out when you apply for a job.
Hilke Schellmann is an Assistant Professor of Journalism at New York University (and a lot of other things): her LinkedIn here.
Gianluca Mauro is an AI advisor, entrepreneur and public speaker: his LinkedIn here.
Here you can read their article, which appeared in The Guardian: ‘There is no standard’: investigation finds AI algorithms objectify women’s bodies
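If you are curious how such a score is even produced: the investigation tested content-moderation tools from several big cloud providers. As a rough illustration (my own sketch, not the authors' exact setup), this is roughly how you could ask Microsoft's Computer Vision "analyze" endpoint for its adult/racy scores for an image; the endpoint, key and image URL are placeholders.

```python
# Hedged illustration: querying an image-moderation API for a "racy" score.
# Endpoint, key and image URL below are placeholders, not real values.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-subscription-key>"                                    # placeholder

response = requests.post(
    f"{ENDPOINT}/vision/v3.2/analyze",
    params={"visualFeatures": "Adult"},                            # ask only for adult/racy analysis
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/photo.jpg"},                 # placeholder image URL
)

adult = response.json()["adult"]
print(f"racy score: {adult['racyScore']:.2f}, flagged as racy: {adult['isRacyContent']}")
```

The changing percentage in the image above comes from tools of this kind: a single number, produced by a model trained on data whose biases are exactly what the article digs into.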
Something to read 2
This second article is written for developers, or at least techies.
It is a journey through concepts like Large Language Models, Transformers and BERT, arriving at a slightly more technical understanding of how ChatGPT was created and how it works - and, by extension, all chatbots built on LLMs.
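Before you click through, here is a tiny, hedged taste of the topic (my own sketch, not taken from the article): a small pretrained transformer generating text one token at a time through the Hugging Face transformers library, with GPT-2 standing in for the far larger models behind ChatGPT.

```python
# Minimal sketch: a small pretrained transformer (GPT-2) continuing a prompt
# one token at a time - the same basic mechanism behind LLM chatbots.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "A large language model is",
    max_new_tokens=30,        # how many tokens to append to the prompt
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

That is only inference, of course; how such a model is actually built and trained is what the article goes into.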
If it's not enough for you to use what's on everyone's lips, but you want to understand what's going on under the hood as a computer scientist, read on:
Something to read 3
As I said at the beginning, this issue will be particularly rich in content, so here is the third must-read resource: March 20 ChatGPT outage: Here’s what happened
We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.
… this is how an article published by OpenAI towards the end of March begins. A very stimulating article that explains the reasons behind the suspension of ChatGPT for a few hours: a bug in a well-known open-source library that, as they describe, was masterfully handled by the maintainers.
This article is a further demonstration of the importance of open-source projects even in complex, large-scale systems: without many of them, a big part of modern-day innovation would probably not exist.
The piece describes well the technical reasons behind the problem and the actions taken to correct or mitigate it:
March 20 ChatGPT outage: Here’s what happened
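To give you the gist before you read it: the post traces the incident to the Redis client library they use, where a request cancelled at just the wrong moment could leave a shared connection holding a reply that the next request then consumed. The snippet below is not that code (nor OpenAI's), just a self-contained toy that reproduces the same class of bug with asyncio.

```python
# Toy reproduction of the failure mode: a cancelled request leaves a stale
# reply on a shared connection, and the next caller receives someone else's data.
import asyncio
from collections import deque


class FakeConnection:
    """Stands in for a pooled connection to a cache/database."""

    def __init__(self) -> None:
        self._pending_replies: deque = deque()

    def send_command(self, user: str) -> None:
        # The "server" will eventually answer with this user's chat titles.
        self._pending_replies.append(f"chat titles of {user}")

    async def read_reply(self) -> str:
        await asyncio.sleep(0.05)           # simulated network latency
        return self._pending_replies.popleft()


async def fetch_titles(conn: FakeConnection, user: str) -> str:
    conn.send_command(user)                 # command goes out...
    return await conn.read_reply()          # ...cancellation here is the danger zone


async def main() -> None:
    conn = FakeConnection()                 # one shared (pooled) connection

    # User A's request is cancelled after the command was sent,
    # but before the reply was consumed.
    task_a = asyncio.create_task(fetch_titles(conn, "user A"))
    await asyncio.sleep(0.01)
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass                                # the connection goes back to the pool "dirty"

    # User B reuses the same connection and gets user A's reply.
    print(await fetch_titles(conn, "user B"))   # -> "chat titles of user A"


asyncio.run(main())
```

The general remedy for this class of bug is to make sure a connection that may still have an unread reply never goes back into the pool.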
Something to watch
The Lex Fridman Podcast is a popular interview podcast hosted by AI researcher and engineer Lex Fridman. With over 300 episodes and millions of listeners, it's a platform for in-depth conversations with a wide range of guests, from scientists and entrepreneurs to artists and intellectuals.
In this episode (which you can watch or just listen to) he interviews Eliezer Yudkowsky, a well-known figure in the field of artificial intelligence and rationality: a researcher and writer who has made significant contributions to AI safety and decision theory.
In this long conversation they discuss the possible dangers of AI when it is used in an uncontrolled manner and without an ethical analysis of its implications.
Something to watch 2 and 3 :)
As a disclaimer, I must first say that I love Jay Alammar: I love his straightforward way of explaining complex concepts with simple graphics and words that are within everyone's reach.
Here are some references if you want to get to know him better:
In the links above you’ll find a lot of content you’ll enjoy, but here are today’s suggestions: two videos on two AI topics.
The first one: AI/ML has seen a rapid acceleration in model improvement over the last few years, and the majority of state-of-the-art models in the field are based on the Transformer architecture. This video explains what a transformer language model is.
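If you would rather poke at one in code first, here is a tiny sketch (mine, not Jay's) of what a transformer language model fundamentally computes: a probability for every possible next token, given the text so far. GPT-2 from Hugging Face stands in as a small, dated example.

```python
# Toy illustration: ask a small transformer LM (GPT-2) which tokens it
# considers most likely to come next after a prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The best way to learn about transformers is to", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

# Turn the scores for the last position into a probability distribution
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {prob:.3f}")
```

Generation is just this step repeated: pick a next token from that distribution, append it, and ask again - which is what Jay's video walks through with much nicer visuals.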
The second video by Jay Alammar explains What is Generative AI.
What's the big deal with Generative AI? Is it the future or the present? In this video, Jay goes over four key reflections on how best to think about the current state of AI products and features, and how to avoid the pitfalls people tend to fall into with new tech.
Something to watch 4
This TED talk was recently suggested by Yann LeCun on his LinkedIn. We already met Yann in this newsletter: he is a computer scientist and a pioneer in the field of deep learning, best known for his work on convolutional neural networks (CNNs), which are widely used for image and speech recognition tasks.
In this talk, Yejin Choi, Professor at the University of Washington, points out that current LLMs, as useful as they are, make very stupid mistakes that reveal a complete lack of common sense.
Enjoy the video!
Something to try
You don't know how to explain to your 5-year-old what ‘lean startup’ means (because we both know your kid will definitely be interested in this concept ;) )? AI comes to our aid here too.
ELI5 stands for “Explain like I’m 5”: it uses AI to explain concepts in a way that a 5-year-old could understand.
Let’s ask ELI5 what a newsletter is:
A newsletter is like a special paper that a company or person writes to their friends or customers. It tells them all about what they are doing and what they want to share with others. They might talk about new things they are selling or things they want to teach people. It's kind of like a letter, but instead of just writing to one person, they write to many people at the same time. People usually sign up or subscribe to get the newsletter so they can read it whenever they want.
ELI5 was made by Oli; you can connect with him on Twitter too.
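And if you would like to hack together your own ELI5-style toy, here is a minimal sketch. I have no idea how Oli built the real thing; this just sends a "talk to a 5-year-old" system prompt to the OpenAI chat API (using the pre-1.0 openai Python package and an API key from your environment).

```python
# Not how ELI5 is actually built - just a tiny sketch of the same idea,
# assuming the pre-1.0 openai package and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def eli5(concept: str) -> str:
    """Ask the model to explain a concept as if talking to a 5-year-old."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Explain things like the reader is 5 years old: "
                        "short sentences, simple words, friendly tone."},
            {"role": "user", "content": f"What is {concept}?"},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]


print(eli5("a lean startup"))
```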
Something to bonus
Some say almost all of us will be replaced by AI; others say it will help us be smarter and will become indispensable.
Kent Beck expands on the tweet you can see above in this Substack article which, believe me, is less pessimistic than it may seem. He tries to analyse which of his skills AI will be able to replace and which tasks it will be able to perform faster, but also how it will help him improve his work and make himself more indispensable and productive.
You can subscribe to his Substack here.
To conclude this newsletter, I have to admit that I too am concerned about how AI may change my work and my life, especially as a software developer.
I do not believe that AI will replace us, but I do think that knowing it, and knowing how to use it, will make the difference: our skills won’t become useless or reproducible by a machine if we keep improving them with the help of these new technologies. I hope my newsletter will help you in this.
See you soon ;)
David