Artificial Intelligence

Optimus Robots are Impressive

The Robotaxi event had an interesting side-effect. The Cybercab and Cybervan drew mixed reviews, with Uber, Lyft, and Waymo actually getting a boost after the event.

We’ll have to wait to see how that race for autonomous taxis plays out.

What was pretty impressive, however, were the Optimus robots mingling amongst the crowds, even serving gifts and interacting with guests with no apparent issues.

Were these remote controlled? Or are humanoids here? Suddenly, we’re not so bored with humanoids.

Robotaxi Overview

In a typically staged production last night, Elon Musk and Tesla showcased the Robotaxi.

The concept is of course great. Taxis and Ubers barely need a driver now, are often undersupplied during events or peak times, and could plausibly be handled by a computer. Tesla has offered self-driving features for a while, so it’s no surprise they want to compete with Uber and Lyft now.

Musk put on the We, Robot party.

Elon Musk Unveils Robotaxi

The Cybercab, one of the models, has no steering wheel or pedals. Musk describes it as individualized mass transit. He explained that the average cost of mass transport is generally $1 or more per mile, versus $0.30 per mile with Tesla’s new vehicles. He has also hinted in the past that Tesla owners will eventually be able to rent their cars out as taxis and earn income.

The company expects to launch Robotaxi service with the Model 3 and Model Y in California and Texas in 2025, with the Cybercab launching towards the end of 2026.

The car uses inductive charging, which means it has no plug.

Who are John Hopfield and Geoffrey Hinton?

Hopfield and Hinton were just awarded the Nobel Prize in Physics for their foundational work in artificial intelligence.

John Hopfield:

John Hopfield is a physicist and neuroscientist best known for his contributions to theoretical neuroscience and artificial intelligence. He developed the Hopfield Network in 1982, a type of recurrent neural network that provided a foundation for modern AI and machine learning models. His work bridged physics, biology, and computation, emphasizing how systems of neurons can perform complex computations. Hopfield is also renowned for his work on associative memory and has made significant contributions to molecular biology. He has been a professor at Princeton University and the California Institute of Technology.
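
To make “associative memory” a bit more concrete, here is a minimal Python sketch of a Hopfield-style network. The eight-element pattern, the three corrupted bits, and the number of update steps are arbitrary choices for illustration, not anything from Hopfield’s paper: the network stores one binary pattern with a simple Hebbian weight rule and then recovers it from a noisy copy.

  import numpy as np

  # Store one binary (+1/-1) pattern in Hebbian weights, then recover it.
  pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the "memory"
  W = np.outer(pattern, pattern).astype(float)        # Hebbian learning rule
  np.fill_diagonal(W, 0)                              # no self-connections

  probe = pattern.copy()
  probe[:3] *= -1                                     # corrupt the first three bits

  state = probe.astype(float)
  for _ in range(5):                                  # repeated synchronous updates
      state = np.sign(W @ state)
      state[state == 0] = 1                           # break ties toward +1

  print("recovered the stored pattern:", np.array_equal(state, pattern))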

Geoffrey Hinton:

Geoffrey Hinton is a cognitive psychologist and computer scientist widely regarded as one of the pioneers of deep learning. He is known for his foundational work on artificial neural networks and the development of backpropagation algorithms, which are central to training modern deep learning models. Hinton’s work laid the groundwork for innovations in image recognition, natural language processing, and AI in general. He is a professor emeritus at the University of Toronto and has worked at Google as a key figure in their AI research division. Hinton has received numerous accolades, including the 2018 Turing Award, often referred to as the “Nobel Prize of Computing,” shared with Yann LeCun and Yoshua Bengio.
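
Backpropagation is, at its core, a recipe for computing how much each weight contributed to the error so that the weights can be nudged in the direction that reduces it. Here is a toy sketch of that idea using TensorFlow’s automatic differentiation; the one-weight “model” and the numbers are made up purely for illustration.

  import tensorflow as tf

  # One-weight "model": predict y = w * x. The tape computes d(loss)/dw,
  # which is the backpropagation step; gradient descent then nudges w.
  w = tf.Variable(0.0)
  x, y_true = 3.0, 6.0               # toy data: the hidden rule is y = 2 * x

  for _ in range(50):
      with tf.GradientTape() as tape:
          y_pred = w * x
          loss = (y_pred - y_true) ** 2
      grad = tape.gradient(loss, w)  # backpropagation
      w.assign_sub(0.01 * grad)      # gradient descent update

  print(round(float(w.numpy()), 2))  # approaches 2.0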

What is unique about Hinton is that he’s an expert who is also sounding alarm bells for society. Hinton believes A.I. will help us in healthcare but is concerned that the technology will eventually take over and become very good at manipulating us (see Apple Intelligence).

Swamp the Vote – Elon Musk Trump Rally Speech

“Free speech is the bedrock of democracy. If they don’t know the truth, how can you make an informed vote….President Trump must win to preserve the constitution, to preserve democracy…this is a must-win situation.”

Watch the full speech below. Musk had one request: with only two days left to register to vote in Georgia and Arizona, everyone needs to register and refer others to vote via SwampTheVote.com.

Tulip Mania and Why We’re Underestimating A.I.

There’s a chart going around that compares Nvidia stock to the growth curve of Tulip Mania, the 17th-century Dutch speculative bubble and crash built around a craze for tulip bulbs.

This is different.

A.I. is already more impactful than electricity. What is coming next in terms of wealth creation, societal change, and disruption from A.I. isn’t fully understood. Nvidia provides much of the compute infrastructure that modern A.I. runs on.

It’s much bigger than anything humans have conceived up until this point.

What is TensorFlow? (Simple Explanation)

Imagine you’re building a robot that can recognize pictures of your favorite animals—dogs, cats, maybe even dragons! To help your robot learn to tell the difference between a dog and a cat, you’d need something super smart, like artificial intelligence (AI). But how do you teach your robot? That’s where TensorFlow comes in.

What is TensorFlow?

At its core, TensorFlow is a tool that helps computers learn to do things on their own, just like how you learn from practice. But instead of practicing soccer or math, the computer practices by looking at data. TensorFlow makes it easier for computers to practice and get better at tasks like recognizing pictures, understanding speech, or even playing games.

How Does TensorFlow Work?

Let’s break it down with an example:

  1. The Goal: Say you want your robot to look at a picture and say if it’s a dog or a cat.
  2. Training the Robot: First, you give the robot thousands of pictures—some of dogs, some of cats—and tell it which is which. The robot doesn’t know at first, but with TensorFlow, it starts to learn. It looks for patterns, like the shape of the ears, the size of the nose, or the fluffiness of the fur.
  3. Making a Guess: After practicing on these pictures, the robot gets good at recognizing the patterns. Now, when you show it a new picture, it can make a pretty good guess whether it’s looking at a dog or a cat.
  4. Getting Better: The more pictures you show the robot, the better it becomes at making the right guess! This is called machine learning—computers get better at tasks by practicing with lots of examples.
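
If you’re curious what those four steps look like in real code, here is a minimal sketch using TensorFlow’s Keras API. The “pets/” folder (with one sub-folder of images per animal) and the tiny network are assumptions for illustration, not a finished project.

  import tensorflow as tf

  # Steps 2-4 in miniature: load labeled pictures, let the model practice,
  # and check how well it guesses on pictures held out for validation.
  # "pets/" is an assumed folder containing cats/ and dogs/ sub-folders.
  train_ds = tf.keras.utils.image_dataset_from_directory(
      "pets/", image_size=(128, 128), batch_size=32,
      validation_split=0.2, subset="training", seed=42)
  val_ds = tf.keras.utils.image_dataset_from_directory(
      "pets/", image_size=(128, 128), batch_size=32,
      validation_split=0.2, subset="validation", seed=42)

  model = tf.keras.Sequential([
      tf.keras.layers.Rescaling(1.0 / 255),              # pixel values -> 0..1
      tf.keras.layers.Conv2D(16, 3, activation="relu"),  # find simple patterns
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Conv2D(32, 3, activation="relu"),  # combine into bigger ones
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(1, activation="sigmoid"),    # 0 = cat, 1 = dog
  ])

  model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
  model.fit(train_ds, validation_data=val_ds, epochs=5)   # the "practice"

The more labeled pictures you feed it (step 4), the better its guesses get.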

Why the Name “TensorFlow”?

The word “TensorFlow” sounds complicated, but it’s just made up of two words:

  • Tensor: A fancy word for numbers or data that the computer looks at. Think of it like a big list or a grid full of information.
  • Flow: This is how the data moves through the system and how the computer learns from it. The data “flows” through different steps (called layers) until the computer makes a decision.
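
To make those two words less abstract, here is what a tensor looks like in TensorFlow and how data “flows” through a couple of steps (the numbers are arbitrary):

  import tensorflow as tf

  # A "tensor" is just a grid of numbers the computer works with.
  grid = tf.constant([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])   # a 2 x 3 grid
  print(grid.shape)                        # (2, 3)

  # The "flow": the data moves through steps that transform it.
  doubled = grid * 2.0
  total = tf.reduce_sum(doubled)
  print(float(total))                      # 42.0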

What Can You Do with TensorFlow?

There are tons of cool things TensorFlow can help create! Here are some fun examples:

  • Self-Driving Cars: TensorFlow helps cars learn to drive by recognizing stop signs, other cars, and even pedestrians.
  • Voice Assistants: When you ask your phone something like “What’s the weather?”, TensorFlow helps it understand what you’re saying and give you the right answer.
  • Translating Languages: If you’ve ever used a translation app, TensorFlow helps by recognizing words in one language and changing them into another language.

Why Is TensorFlow Important?

Before TensorFlow, teaching computers was really tricky and took a lot of time. TensorFlow made it easier for people—scientists, engineers, and even students—to build smart systems without having to do all the hard work from scratch. It’s like using a calculator instead of doing long division by hand!

Now, with TensorFlow, AI is more accessible, and we see cool advancements all the time. It’s even used in games, art, and health care!

How Can You Start Learning TensorFlow?

If you’re curious about TensorFlow and want to build your own smart projects, you can start small:

  • Try using coding websites like Scratch to understand basic programming.
  • Explore tutorials online that introduce AI and machine learning.
  • And if you’re really into it, you can download TensorFlow for free and start experimenting!
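
If you do download it, a first experiment can be as small as the sketch below: after running "pip install tensorflow", fit a straight line to four points and ask the model to predict a new one. The data and the single Dense layer are just for illustration.

  import tensorflow as tf

  # A tiny first experiment: learn the rule y = 2x + 1 from four examples.
  xs = tf.constant([[0.0], [1.0], [2.0], [3.0]])
  ys = tf.constant([[1.0], [3.0], [5.0], [7.0]])

  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  model.compile(optimizer="sgd", loss="mse")
  model.fit(xs, ys, epochs=500, verbose=0)     # practice 500 times

  print(model.predict(tf.constant([[10.0]])))  # should be close to 21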

In a nutshell, TensorFlow is like a super-smart teacher for computers. It helps them learn from examples, improve with practice, and do amazing things—like identifying animals or even driving cars! Whether you’re into robotics, gaming, or science, TensorFlow opens up a world of possibilities for you to explore.

Who is Noam Shazeer?

Who is this person that Google essentially acquired back after he left in frustration? The price tag: $2.7B.

The A.I. talent war is heating up!

To start with, Noam Shazeer is a name you’ll probably start hearing more of. He is currently a Google VP, brought back to help lead Google’s Gemini project. He is a prominent computer scientist and engineer known for his groundbreaking work in machine learning and natural language processing (NLP). He has made significant contributions to the field of artificial intelligence, particularly in the development of models and architectures that power modern NLP systems.

Some of his notable contributions include:

  1. Transformer Architecture: Shazeer was one of the key co-authors of the 2017 paper “Attention is All You Need,” which introduced the Transformer model. This model revolutionized NLP by improving the way machines process language through attention mechanisms (see the short sketch after this list), leading to major advancements in language models like GPT, BERT, and others.
  2. TensorFlow and Google Brain: He has been closely associated with Google Brain, where he contributed to the development of TensorFlow, a widely used open-source machine learning library. His work at Google involved large-scale machine learning projects and the optimization of AI systems for better performance and scalability.
  3. Mixture of Experts (MoE): Shazeer worked on the Mixture of Experts model, which is a scalable deep learning architecture that can allocate different parts of a model for different tasks. This approach helps in scaling models efficiently while maintaining high-quality performance in specific tasks.
  4. Co-Founder of Character.AI: In 2021, Shazeer co-founded Character.AI, a startup focused on building conversational AI systems that allow users to interact with characters and personalities simulated by AI. This project aims to push the boundaries of human-AI interaction.
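
The attention mechanism mentioned in point 1 boils down to a small recipe: each word scores how relevant every other word is, the scores are turned into weights with a softmax, and the output is a weighted average of the other words’ values. Here is a toy TensorFlow sketch, with small random matrices standing in for real learned word representations:

  import tensorflow as tf

  # Scaled dot-product attention, the core operation of the Transformer.
  # Shapes: 4 tokens, each represented by an 8-dimensional vector (made up).
  q = tf.random.normal([4, 8])   # queries
  k = tf.random.normal([4, 8])   # keys
  v = tf.random.normal([4, 8])   # values

  scores = tf.matmul(q, k, transpose_b=True) / 8.0 ** 0.5  # token-to-token relevance
  weights = tf.nn.softmax(scores, axis=-1)                 # each row sums to 1
  output = tf.matmul(weights, v)                           # weighted mix of values

  print(weights.shape, output.shape)   # (4, 4) (4, 8)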

Shazeer’s innovations have had a profound impact on the development of AI, influencing both research and commercial applications in the field of NLP.

The Age of Intelligence according to Sam Altman

Sam Altman, the co-founder and CEO of OpenAI, recently said something interesting about the evolution of A.I…

According to Altman, it will evolve like this:

1. Data: The building blocks of A.I.
2. Models: Intelligence built on data
3. Agents (Current): Semi-intelligent bots using models to assist humans
4. Innovators: Entrepreneurs putting the pieces together to make wealth by solving problems and creating products
5. Organizations: Businesses totally formed and run by A.I.

He thinks we’re ‘months’ away, which could mean years, but it’s close. Read Sam’s full blog post here.

“It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI”

Sam Altman
September 2024