Here we go. OpenAI vs Google vs Microsoft. Not even close, but that’s what some of the popular tech blogs want you to believe. But could ChatGPT Search disrupt the big search giants? Anything is possible in the A.I. age.
OpenAI announced a new feature this week – ChatGPT Search. The feature is available to paid users (enter massive monetization for A.I. models).
It essentially searches the web and then organizes the results for you. As the demo indicates, you can ask it to handle a task like “plan a trip” and it will start organizing the whole thing based on public search results.
Looks cool. One question, though – what about fake search results? Also, the sensational headlines about OpenAI now taking on Google are ridiculous. Google’s Gemini can do the same thing.
In a heartbreaking turn of events, a 14-year-old teen in Florida recently died by suicide. The mother has filed a lawsuit claiming that the A.I., in this case a Character.AI chatbot modeled on a character from Game of Thrones, not only didn’t try to stop the teen, but may actually have encouraged him to do it.
“I want them to understand that this is a platform that the designers chose to put out without proper guardrails, safety measures or testing, and it is a product that is designed to keep our kids addicted and to manipulate them,” Garcia (the boy’s mother) said in an interview with CNN.
Nobel Prize winner Geoffrey Hinton recently came out and said that one of his biggest fears is that A.I. will soon be able to manipulate us “like a parent would a toddler”.
Russia recently issued Google a nonsensical fine – reportedly compounding to around $20 decillion, more money than exists on Earth – for blocking pro-Russia YouTube channels.
It raises some questions: What is free speech? What counts as pro-Russian vs. anti-Russian content, and who makes that decision? And which court has jurisdiction over a fine levied by a country against a software company?
This is a developing story. Read more about it at CNN.com.
In AI, LLM stands for Large Language Model. These are advanced types of machine learning models designed to process and generate human-like text based on vast amounts of text data. LLMs are trained on a variety of language tasks, including text completion, translation, summarization, and even coding. Popular examples of LLMs include OpenAI’s GPT series, Google’s BERT, and Meta’s LLaMA.
Their “large” nature comes from having billions to trillions of parameters (the internal adjustable elements that help the model learn patterns in data), enabling them to handle complex language tasks with high accuracy.
A note about Google: there is a common misconception that BERT became Gemini. It didn’t – Gemini is the model previously called Bard.
BERT and Gemini are distinct models in Google’s AI landscape rather than one being a rebranding of the other. BERT (Bidirectional Encoder Representations from Transformers) is an influential language model from Google introduced in 2018, known for its ability to understand the context of words in a sentence through bidirectional training. BERT has been widely applied in natural language processing tasks, especially in Google Search.
Gemini, however, is a newer, multimodal language model series that Google launched in 2023, which powers its updated AI chatbot, formerly known as Bard. Gemini is advanced in handling diverse input formats—text, audio, images, and video—and has been optimized for complex tasks such as logical reasoning, contextual understanding, and multimodal data processing. The Gemini series includes several versions like Gemini Pro and Gemini Ultra (Gemini Advanced), with additional models launched throughout 2024 for various applications and devices. This evolution reflects Google’s broader AI ambitions beyond what BERT was initially designed to achieve.
If you’d like to try an LLM as a developer, here is how to run Meta’s LLaMA using the Hugging Face transformers library:
Code Example for LLaMA
Install Dependencies:
pip install transformers torch
Then you can run it:
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

# Load the tokenizer and model (note: LLaMA weights are gated on Hugging Face,
# so you must request access to the repository before downloading)
model_name = "meta-llama/LLaMA-7B"  # Replace with the model name you're using
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda")

# Define the input prompt
input_prompt = "Once upon a time in a futuristic city, there was an AI that could"
inputs = tokenizer(input_prompt, return_tensors="pt").to("cuda")

# Generate text
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_length=50,  # Adjust max_length based on desired output length
        do_sample=True,
        top_k=50,
        top_p=0.95,
        temperature=0.7,
    )

# Decode and print the output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
Well, not entirely run by AI, but certainly heading in that direction.
Alphabet Inc., Google’s parent company, announced earnings per share of $2.12 yesterday, beating last year’s $1.55. This continues a run of consistently beating expectations.
What really stood out was that CEO Sundar Pichai mentioned that more than 25% of all new code being written at the company is now generated by Artificial Intelligence.
We’re a few days out from the 2024 United States Presidential Election. The polls currently show Trump and Harris essentially tied. But how is A.I. playing a role in this kind of analysis?
AI plays a significant role in election prediction by analyzing vast amounts of data to identify patterns, trends, and correlations that help forecast election outcomes. Here’s how AI is applied in this field:
Above: AI helps predict elections, but could it influence them?
1. Polling Analysis and Sentiment Prediction
Polling Data: AI can process and analyze polling data, identifying patterns that may indicate how groups are likely to vote. AI models help correct for biases in polling by accounting for demographic shifts and sampling errors.
Sentiment Analysis: AI can analyze social media, news, and other public content to gauge voter sentiment toward candidates and issues. By processing text data through natural language processing (NLP), it can predict whether public opinion is shifting.
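As a toy illustration of the sentiment-analysis idea, here is a minimal lexicon-based scorer. Real systems use trained NLP models, and the word lists below are invented for the sketch – the point is simply mapping text to a positive/negative score.

```python
# Toy lexicon-based sentiment scorer (illustrative word lists, not real data)
POSITIVE = {"great", "strong", "win", "hope", "support"}
NEGATIVE = {"scandal", "weak", "lose", "fear", "oppose"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: +1 if all sentiment words are positive,
    -1 if all are negative, 0 if neutral or no sentiment words found."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "strong debate performance, big win for the campaign",
    "another scandal, voters fear the worst",
]
scores = [sentiment_score(p) for p in posts]
```

A production pipeline would replace the word lists with a trained classifier, but the output – a per-post sentiment score that can be aggregated over time – plays the same role in a prediction model.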
2. Voter Behavior Modeling
Voter Segmentation: Machine learning models can categorize voters based on factors like age, location, political ideology, and socioeconomic status. This segmentation allows AI models to make more precise predictions by assessing how different voter demographics might behave.
Turnout Prediction: By looking at past turnout data and current sentiment, AI can forecast who is likely to vote and which demographics may drive higher turnout. AI considers factors like weather, current events, and candidate popularity when predicting turnout.
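The turnout-prediction idea can be sketched as a logistic model over demographic features. The feature names and coefficients below are purely illustrative, not fitted to any real electorate:

```python
import math

# Toy turnout model: logistic function over hand-picked demographic weights.
# These coefficients are made up for illustration, not learned from data.
WEIGHTS = {"age_over_45": 0.8, "voted_last_election": 1.5, "college_degree": 0.4}
BIAS = -1.0

def turnout_probability(voter: dict) -> float:
    """Probability this voter turns out, via a logistic (sigmoid) function."""
    z = BIAS + sum(WEIGHTS[k] * voter.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

likely = turnout_probability({"age_over_45": 1, "voted_last_election": 1, "college_degree": 1})
unlikely = turnout_probability({})
```

A real model would learn the weights from historical turnout data and include far more features (weather, local races, enthusiasm measures), but the structure is the same.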
3. Predictive Analytics
Forecasting Models: Advanced machine learning models, including regression analysis and neural networks, are used to forecast outcomes based on historical data, trends, and real-time information. This includes models like random forests, gradient boosting, and recurrent neural networks (RNNs).
Data Fusion: By combining data sources (such as polling, economic indicators, social media sentiment, and demographic data), AI models build a more comprehensive prediction framework. This approach helps mitigate the weaknesses of any single data source.
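Data fusion in its simplest form is a reliability-weighted average of estimates from different sources. The source names, estimates, and weights below are invented to show the mechanics:

```python
# Data-fusion sketch: combine noisy estimates of a candidate's vote share
# using fixed reliability weights. Real models learn these weights.
sources = {
    "polling_avg": (0.51, 0.60),      # (estimate, weight)
    "economic_model": (0.48, 0.25),
    "social_sentiment": (0.53, 0.15),
}

def fused_estimate(sources: dict) -> float:
    """Weighted average of all source estimates."""
    total_weight = sum(w for _, w in sources.values())
    return sum(est * w for est, w in sources.values()) / total_weight

combined = fused_estimate(sources)
```

Because each source errs in different ways, the weighted combination tends to be more stable than any single input – the same intuition behind poll aggregators.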
4. Real-Time Event Analysis
Event Impact Prediction: AI can analyze the effects of events like debates, scandals, or economic reports. By tracking real-time public reaction and integrating this data into prediction models, AI can adapt predictions as events unfold.
Social Media Dynamics: AI examines the reach and spread of social media posts and hashtags to determine how influential certain narratives or pieces of news are in swaying public opinion.
5. Predictive Uncertainty and Scenario Analysis
Uncertainty Analysis: AI can model various election scenarios, predicting potential outcomes and their likelihood. This approach helps analysts and campaigns understand the factors with the greatest impact on possible results.
Error Correction: AI models can also be tuned to account for uncertainties in data quality and model assumptions, refining predictions as new data becomes available.
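Scenario analysis is often done with Monte Carlo simulation: perturb a base margin with random polling error many times and count how often each side wins. The error size below is a made-up assumption for illustration:

```python
import random

# Monte Carlo scenario sketch: add random polling error to a base margin
# and count wins. The 3-point error standard deviation is an assumption.
def win_probability(margin: float, polling_error_sd: float = 0.03,
                    trials: int = 10_000, seed: int = 42) -> float:
    """Fraction of simulated scenarios in which `margin` survives the noise."""
    rng = random.Random(seed)
    wins = sum(margin + rng.gauss(0, polling_error_sd) > 0 for _ in range(trials))
    return wins / trials

p_narrow = win_probability(0.01)  # a 1-point lead is far from a sure thing
p_wide = win_probability(0.10)    # a 10-point lead almost always holds
```

This is why forecasters report probabilities rather than point predictions: a small polling lead translates into only a modest edge once uncertainty is modeled.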
In short, AI enhances the precision, adaptability, and depth of election predictions by continuously learning from diverse datasets and adapting to the rapidly changing dynamics of electoral cycles.
Denmark’s supercomputer, named “Gefion,” is a powerful computing system primarily used for scientific research and data-intensive tasks. Operated by the Danish Centre for AI Innovation and backed by the Novo Nordisk Foundation, Gefion was named after the Norse goddess, fitting as it symbolizes strength and creation, resonating with its computational power.
Gefion is utilized in various fields, including climate modeling, astrophysics, bioinformatics, and material science. Its high-performance capabilities support complex simulations and analyses that would be unfeasible on standard computers, allowing Danish researchers to contribute to cutting-edge discoveries and innovations.
Gefion (also spelled Gefjon) is the name of a Norse goddess associated with ploughing and prosperity.
NVIDIA published this interesting research paper on ‘pruning’ large language models. It’s an interesting read because language models are so power-intensive to train. Read the whole paper here.
This paper, LLM Pruning and Distillation in Practice: The Minitron Approach, outlines a model compression strategy for large language models (LLMs), specifically targeting Llama 3.1 8B and Mistral NeMo 12B, and reducing them to 4B and 8B parameters. The approach leverages two pruning methods, depth and width pruning, combined with knowledge distillation to maintain model accuracy on benchmarks while reducing computational costs and model size.
Key points include:
Teacher Correction: The authors address data distribution differences by fine-tuning teacher models with a separate dataset before pruning and distillation.
Pruning Techniques: Width pruning adjusts hidden and MLP dimensions without altering attention heads, while depth pruning reduces layers. They found width pruning to preserve accuracy better, while depth pruning improves inference speed.
Distillation: Knowledge is transferred from the larger model (teacher) to the compressed model (student) using KL divergence loss, optimizing performance without the original training data.
Performance: The pruned Llama and Mistral models (MN-Minitron-8B and Llama-3.1-Minitron-4B) achieve state-of-the-art results across language benchmarks, with Llama-3.1-Minitron-4B models exhibiting 1.8-2.7× faster inference speeds.
The authors release models on Hugging Face and demonstrate practical gains, providing a scalable, cost-effective compression framework for large model deployment.
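To make the recipe concrete, here is a toy sketch of the two ideas – prune a layer’s least important units, then train the student against the teacher’s output distribution with a KL divergence loss. This is not the paper’s code; magnitude is used as a crude importance proxy and the probability values are invented:

```python
import math

# Toy sketch of the pruning + distillation recipe (not the paper's code).

def width_prune(weights: list[float], keep: int) -> list[float]:
    """Keep the `keep` largest-magnitude units (magnitude as a crude
    stand-in for the activation-based importance scores real methods use)."""
    return sorted(weights, key=abs, reverse=True)[:keep]

def kl_divergence(teacher: list[float], student: list[float]) -> float:
    """KL(teacher || student) in nats; assumes strictly positive probs.
    This is the quantity the distillation loss drives toward zero."""
    return sum(t * math.log(t / s) for t, s in zip(teacher, student))

pruned = width_prune([0.9, -0.05, 0.4, 0.01], keep=2)

teacher_probs = [0.70, 0.20, 0.10]   # teacher's next-token distribution
close_student = [0.68, 0.22, 0.10]   # nearly matches the teacher
far_student = [0.10, 0.20, 0.70]     # disagrees badly with the teacher
```

A student whose outputs match the teacher’s has near-zero KL loss, which is why distillation can recover most of the accuracy the pruning step removes.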
In simpler terms, this research shows how to make large AI models smaller and more efficient without losing much of their performance. Here’s how it could impact AI development:
More Accessible AI: By compressing large models, it becomes easier and cheaper for more people and organizations to use advanced AI, especially those who can’t afford the vast computing resources typically needed for huge models.
Faster AI Applications: The pruned models run faster, meaning they can respond more quickly to user queries. This improvement could enhance real-time applications like chatbots, virtual assistants, or interactive educational tools.
Energy and Cost Savings: Smaller models need less power to run, which lowers the environmental impact and makes AI systems more affordable to maintain over time.
Broader Deployment: These more compact models can run on smaller devices (like phones or laptops) rather than only on large, expensive servers. This could bring advanced AI capabilities to more personal devices, improving accessibility and functionality for users globally.
Overall, these compression techniques help make powerful AI tools faster, cheaper, and more widely available, which could accelerate innovation in fields like healthcare, education, and personal tech.
Nobel Prize winner Geoffrey Hinton has recently been warning humanity about the potential impacts of AI on society. But how realistic is it that a bunch of computers could control humans?
The idea of AI “turning on” humanity usually comes from concerns about advanced, autonomous AI systems that might act in ways that go against human interests. There are several such risks, including drone warfare and economic disruption, but here is the one that worries us the most:
AI Manipulation of Human Psychology
Problem: Advanced AI systems, especially in social media and advertising, can already manipulate human behavior, potentially leading to societal division or harm.
Example: AI algorithms that prioritize engagement can promote divisive or harmful content, creating echo chambers that polarize societies. In the wrong hands, such AI could also be used to manipulate populations for specific political or economic outcomes.
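The engagement-first ranking problem can be shown in a few lines. The posts and scores below are invented; the point is that when a feed is sorted purely by predicted engagement, provocative content rises regardless of its value:

```python
# Toy feed-ranking sketch: rank posts purely by predicted engagement.
# Scores are made up -- the point is that outrage often engages more.
posts = [
    {"text": "Balanced policy explainer", "predicted_engagement": 0.12},
    {"text": "Outrage-bait hot take", "predicted_engagement": 0.48},
    {"text": "Cute dog photo", "predicted_engagement": 0.35},
]

ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
feed = [p["text"] for p in ranked]
```

Nothing in this objective rewards accuracy or civility, which is exactly the concern: the optimization target shapes what people see.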
With recent developments in artificial intelligence, Apple and other tech giants are bringing AI into our everyday lives in ways that look a lot like the world depicted in Spike Jonze’s Her. The 2013 film starring Joaquin Phoenix as Theodore, a man who falls in love with his AI assistant Samantha, is a love story layered with complex themes about technology, connection, and human emotion. As we see products like Apple’s Siri, more advanced conversational AI, and virtual assistants woven into our routines, the parallels between Her and today’s AI are closer than ever.