AI Deepfakes in Politics, AI Revolutionizing Programming & The Reality of AI Detectors
A roundup of today's AI developments, discoveries, controversies, and innovations
What's up, y'all, this is AIdeations, the go-to newsletter that takes AI and tech news that slaps and turns it into a no-BS, fun email for you each day.
TL;DR
Today's Aideations newsletter covers a wide array of topics, from the controversial use of AI deepfakes in political campaigns to DeepMind's AI revolutionizing sorting algorithms. We also dive into the questionable reliability of AI paper detectors and the AI-assisted design helping IKEA create a sofa that fits in an envelope. In the news roundup, we explore the growing potential of AI in healthcare, its implications for the future of AI chatbots, and a lawsuit filed against OpenAI. Lastly, we examine a research paper detailing Orca, an AI model that learns from GPT-4, a promising step forward for AI model learning and imitation.
If you've got suggestions on how I can improve the newsletter, feel free to reach out at [email protected]
Here's what we've got in store for you today:
🥸 DeepFake Political Ads
🐖 99% Accurate AI Detectors? Let's Talk When Pigs Fly
♟️ DeepMind's AI Reinvents Sorting Algorithms
🛋️ AI Helps IKEA Put A Sofa In An Envelope
📚 Research Of The Day
🎥 Video Of The Day
🛠 Tools Of The Day
🤌 Prompt Of The Day
DeSantis' Team Turns to AI Deepfakes for Anti-Trump Messaging

In a recent twist of political strategy, the campaign for Ron DeSantis, gearing up for the 2024 Republican presidential nomination, is making headlines for its use of AI-generated deepfakes. The campaign's "DeSantis War Room" Twitter account posted an attack ad against Donald Trump, showing Trump in what seems like a close relationship with Anthony Fauci, a figure frequently criticized on the political right. The catch? Some of those cozy moments between Trump and Fauci appear to be the handiwork of generative AI.
The video contains real footage of Trump discussing Fauci, mixed with a six-photo collage showing the two together. On close inspection, half of those images, showing Trump embracing Fauci, seem to have been crafted not in reality but in the silicon minds of AI. The details gave away the game: glossy, blurred textures, improbable poses, and a dubious rendition of the White House press briefing room. Notably, one image showed a sign in the background with garbled text and a color mismatch - things that our current AI image generation systems struggle to perfect.
Hany Farid, a recognized expert in image forensics, and Siwei Lyu, a digital media forensics expert, independently concluded that these images are deepfakes. They noted that the images could not be traced back through reverse image searches, a strong indication of their artificial origin. It's a sobering development, as AI deepfakes become a mainstream tool in political communication.
Earlier instances include Trump's team sharing an AI-created image of him praying, an audio deepfake poking fun at DeSantis’ campaign launch, and the Republican National Committee publishing an attack ad featuring AI-generated imagery after Biden's 2024 re-election announcement. This trend complicates the task of distinguishing genuine content from manipulated media, especially when it's designed to reinforce existing biases. As political campaigns continue to blur the lines, we must all stay vigilant in the face of these deepfake realities.
From Chess Champ to Code Guru: DeepMind's AI Reinvents Sorting Algorithms

MidJourney Prompt: Imagine a scene that captures the transition from a chess champion to a coding guru. In the foreground, we see a chess board with pieces arranged in the middle of a game, reflecting the strategic mind of a chess player. The chess board is on a wooden table, lit by a soft, warm light from a desk lamp. The background transitions into a more modern setting, with a high-resolution computer screen displaying complex sorting algorithms in a code editor. The screen's cool, blue light contrasts with the warm light on the chess board. The room is dimly lit, emphasizing the two sources of light. The photo should be taken with a Canon EOS 5D Mark IV DSLR camera, EF 50mm f/1.8 STM lens, with a resolution of 30.4 megapixels, ISO sensitivity: 32,000, Shutter speed 8000 second. The style should be hyper-realistic, with high detail and sharp focus on both the chess board and the computer screen. --ar 16:9 --v 5.1 --style raw --q 2 --s 750
Google's DeepMind has just put a fresh spin on sorting algorithms using their AI, AlphaDev. The twist? Treating programming as a strategic game. This unconventional approach follows DeepMind's prior success with AI that taught itself to conquer games like chess, Go, and StarCraft. Now, it's applying the same learning strategy to optimizing code, which has resulted in improvements to sorting algorithms that are likely executed trillions of times daily.
Here's the scoop: AlphaDev uses reinforcement learning to minimize the latency (or time delay) of the code while ensuring it runs without errors. Unlike traditional methods that rely heavily on human-made code examples, AlphaDev generates and evaluates its own code examples, learning from successful combinations of instructions and their effects on sorting efficiency. The outcome? AlphaDev can develop highly efficient sorting algorithms in an autonomous and innovative way.
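To get a feel for the "programming as a game" framing, here's a toy sketch in Python. This is not DeepMind's actual setup (AlphaDev plays its game over low-level assembly instructions with a learned AlphaZero-style agent); the sketch just illustrates the core loop: moves are compare-and-swap instructions, the reward combines correctness on every possible input with a small penalty per instruction as a crude stand-in for latency, and a simple random search fills in for the reinforcement-learning agent.

```python
import itertools
import random

# Every possible input a 3-element sort must handle.
TEST_INPUTS = [list(p) for p in itertools.permutations([1, 2, 3])]

# "Instructions" in this toy game: compare-and-swap on a pair of indices.
MOVES = [(0, 1), (0, 2), (1, 2)]

def apply_program(program, values):
    """Run a sequence of compare-and-swap moves on a copy of the input."""
    v = list(values)
    for i, j in program:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def reward(program):
    """Score correctness, minus a small penalty per instruction (a stand-in for latency)."""
    correct = sum(apply_program(program, x) == sorted(x) for x in TEST_INPUTS)
    return correct - 0.1 * len(program)

# Placeholder for the RL agent: random search over short instruction sequences.
random.seed(0)
candidates = (
    [random.choice(MOVES) for _ in range(random.randint(1, 5))]
    for _ in range(20_000)
)
best = max(candidates, key=reward)
print("best program:", best, "reward:", reward(best))
```

Run it and the search should land on a three-instruction program that sorts every permutation correctly, the same flavor of result AlphaDev produces, just at the level of real assembly and with far smarter search.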
AlphaDev's prowess shone through when optimizing functions that handle specific sorting scenarios, such as sorting three, four, or five items. The AI managed to shave off an instruction from most functions, resulting in better performance. It even managed to rewrite a function that sorts up to four items with a strategy that outperformed the existing code, illustrating the benefits of this AI-led innovation.
The incorporation of these AI-optimized sorting algorithms into the LLVM standard C++ library marks a significant step forward for computer programmers. It's a clear testament to the power and potential of AI in making code more efficient, and it could transform the way we approach programming in the future. DeepMind's success story is an exciting nod to the continued growth of AI in the tech world, making things faster, smarter, and perhaps just a little more surprising.
99% Accurate AI Detectors? Let's Talk When Pigs Fly

MidJourney Prompt: flying pigs photorealistic --ar 16:9 --v 5.1
It's time we had a chat about the new sheriff that's trying to make a name for itself in the wild west of academic writing. I'm talking about the supposedly 99% accurate AI detector developed by the University of Kansas. The idea? It's gonna sniff out AI-generated academic papers like a bloodhound on a trail. They've even put this AI sniffer through its paces, getting it to analyze 128 articles, all courtesy of ChatGPT. And voila! It separated the human- and AI-written stuff with pinpoint accuracy, or so they say.
But here's the twist, folks. Not all that glitters is gold, and certainly not these AI detectors. Let's jump in the time machine and revisit the case of our friend at UC Davis, falsely accused of academic dishonesty, thanks to her paper being flagged as AI-written. I mean, come on, what happened to innocent until proven guilty? Fortunately, time stamps came to the rescue, proving her innocence and highlighting the fallibility of our AI police.
Fast forward a bit and we find a Texas A&M University-Commerce professor in a pickle. Apparently, ChatGPT played the accuser, claiming more than half of his senior class had used the AI to write their papers. I don't know about you, but that smells fishy to me. I've seen these tools in action, and let me tell you, they're about as reliable as a chocolate teapot.
Now don't get me wrong, OpenAI's ChatGPT is no angel. It has given teachers a headache since it went public in November 2022, turning into an overnight sensation with over a million users in record time. This led some schools to show it the door. In response, the market has been flooded with tools claiming near-perfect detection rates. But take it from me, they're not all they're cracked up to be.
Case in point, OpenAI's Classifier. Now there's an AI tool that can't tell a human from a robot, with a laughable success rate of just 26%. Other contenders like Turnitin, Copyleaks, and Winston AI are all singing the same tune, boasting a detection accuracy of around 98-99%. But trust me, they're more sizzle than steak. I've tried almost all these tools and let me tell you, with good descriptive prompting, they're about as effective as a blindfolded darts player.
While these AI detectors strut around making grand claims, they've got a long way to go before they're ready for prime time. Until they can consistently distinguish AI-generated content from human-written text without crying wolf, I'd say it's too soon to roll out the red carpet for them. It's a classic case of 'Emperor's New Clothes', and folks, I'm here to tell you, the emperor is stark naked.
Sofa in an Envelope: IKEA's AI Does Magic!

The brainiacs at Space10 (IKEA's indie design studio) have just created a flat-pack sofa that weighs 22 lbs and — drumroll, please — fits in an envelope. They teamed up with Swiss designers, Panter&Tourron, threw some machine learning into the mix, and poof! A modular, lightweight, envelope-ready couch.
The magic ingredient? The phrase "conversation pit." Feeding that prompt to the machine-learning model is what turned its designs into people-facing-each-other sofa masterpieces. It's not for sale yet; for now it's just strutting its stuff at an AI design exhibition and the Copenhagen Architecture Festival.
Bonus points: this bad boy is eco-friendly. No tools, no screws, and it's easy to transport. Who knew moving a sofa could mean popping it in the mail?

📰 News From The Front Lines: 📰
📚 Research Of The Day 📚
This research paper introduces Orca, a 13-billion-parameter model that learns to imitate the reasoning process of large foundation models (LFMs). Smaller models trained by imitation have typically run into trouble: they get limited imitation signals from shallow LFM outputs, train on small-scale, homogeneous data, and are evaluated so loosely that their capabilities get overestimated, since they tend to pick up the style of LFMs but not their reasoning. Orca addresses this by learning from rich signals from GPT-4, including explanation traces, step-by-step thought processes, and other complex instructions. The payoff is substantial: Orca surpasses conventional state-of-the-art instruction-tuned models by more than 100% on complex zero-shot reasoning benchmarks. The paper also makes a broader point about the importance of rigorous evaluation in accurately assessing what small models can actually do. A toy sketch of what one of these explanation-rich training records might look like follows the breakdown below.
Pros:
Orca is a 13-billion parameter model that learns to imitate the reasoning process of large foundation models (LFMs).
Orca learns from rich signals from GPT-4, including explanation traces, step-by-step thought processes, and other complex instructions.
Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% on complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH), and by 42% on AGIEval.
Cons:
The quality of smaller models can be impacted by limited imitation signals from shallow LFM outputs.
Small scale homogeneous training data can also impact the quality of smaller models.
A lack of rigorous evaluation leads to overestimating small models' capabilities, since they tend to learn to imitate the style but not the reasoning process of LFMs.
Limitations:
The research focuses on enhancing the capability of smaller models through imitation learning.
The evaluation is limited to complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH) and AGIEval.
Implications:
Orca presents a solution to the challenges faced by smaller models in enhancing their capabilities through imitation learning.
This research highlights the importance of rigorous evaluation in accurately assessing the capabilities of small AI models.
The approach used in this research could be applied to other areas beyond AI, such as education or training.
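To make the "rich signals" idea concrete, here's a hypothetical sketch of a single explanation-tuned training record. The field names and the example task are purely illustrative and not the paper's actual data format; the point is that the student model trains on the teacher's full step-by-step explanation rather than just its final answer.

```python
import json

# Hypothetical training record for explanation tuning -- field names are
# illustrative, not Orca's actual schema.
record = {
    # System message asking the teacher (GPT-4) to expose its reasoning.
    "system": "You are a helpful assistant. Think step by step and explain your reasoning.",
    # Task drawn from an instruction-tuning collection.
    "instruction": "A train travels 60 miles in 1.5 hours. What is its average speed?",
    # Teacher output: the explanation trace is the "rich signal" the
    # smaller student model learns to imitate, not just the final answer.
    "teacher_response": (
        "Average speed is distance divided by time. "
        "60 miles / 1.5 hours = 40 miles per hour. "
        "Answer: 40 mph."
    ),
}

print(json.dumps(record, indent=2))
```

Compare that with plain imitation on shallow outputs, where the student would only ever see the final "Answer: 40 mph" and get no signal about how the teacher arrived at it.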
📼 Video Of The Day 📼
🛠️ Tools Of The Day 🛠️
ROI4Presenter - AI that makes a presentation like a human. Or better
Storly - Create your life story using AI that interviews you.
Archive - Automatically detect and display the social media posts your brand is tagged in
Fadr - Generate AI royalty-free music for free
Gliglish - Improve your fluency and speak with confidence, with the first A.I.-based language teacher
Martekings - Exclusive AI solutions & guides to skyrocket your growth
🤌 Prompt Of The Day 🤌
Write a marketing campaign outline that uses [emotional appeal] to persuade [ideal customers] to take action and purchase [product/service]. For every section in the campaign give step-by-step instructions.
Emotional appeal = [Insert Here]
Ideal customers = [Insert Here]
Product = [Insert Here]
Thanks for tuning in to our daily newsletter. We hope you found our tips and strategies for AI tools helpful.
Your referrals mean the world to us. See you tomorrow!
Interested in Advertising on AIdeations?
Fill out this survey and we will get back to you soon.
DISCLAIMER: None of this is financial advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. Please be careful and do your own research.