Adobe’s Firefly 2, Deepfake Controversies, OpenAI's Dev Day, and AI-Driven Drug Discovery
Get In The Know: The Power Shifts and Ethical Quandaries Redefining the AI Landscape


In today's power-packed edition, we delve deep into Adobe's exciting advances in enterprise-centric AI image generation with Firefly 2 and GenStudio. We also tackle the unsettling rise of deepfakes, dissecting their impact on news credibility and public trust. OpenAI's Developer Day promises enticing benefits but raises questions on ethical use. Lastly, we spotlight how AI is radically transforming drug discovery, shortening timelines from years to months.
📰 News From The Front Lines
📖 Tutorial Of The Day
🔬 Research Of The Day
📼 Video Of The Day
🛠️ 6 Fresh AI Tools
🤌 Prompt Of The Day
🐥 Tweet Of The Day
Adobe’s Firefly 2: Bridging the Enterprise Gap in AI Image Generation

Image Source: PCWorld
Adobe has significantly ramped up its AI game, showcasing a slew of advancements and products over the past year. The Adobe MAX conference this week was the epicenter of these unveilings, with Firefly Image 2 stealing the spotlight. This updated version brings improved prompt understanding and heightened photorealism to the table, pushing it to the forefront alongside notable generative AI models like DALL-E 3 from OpenAI. Firefly Image 2's enterprise-centric features position it as a strong competitor against Canva, especially for businesses seeking to streamline their design processes.
One of the standout features of Firefly 2 is Generative Match. Unlike the baked-in typography found in DALL-E 3 or Ideogram, Generative Match allows users to create images in a specific style using a reference image. This feature is akin to the “style transfer” AI art filters popular on social platforms a few years ago, but with a more sophisticated touch. It's designed to help users easily adhere to brand guidelines and save time on designing from scratch, offering a quick way to maintain a consistent look across different assets. This new tool is a clear nod to the growing rivalry between Adobe and Canva, aiming to cater to the "everyone else" sector, which includes professionals without art degrees who nonetheless need to create visual material.
Adobe didn't stop at Firefly. The unveiling of GenStudio, a new generative AI-powered program, allows companies to customize Firefly for their specific needs, underscoring Adobe's push towards providing enterprise-friendly solutions. GenStudio enables organizations to control how their employees use Adobe programs through APIs, ensuring that generated content stays on-brand and safe for commercial use. This feature is particularly appealing for teams looking to instantly generate design templates for various platforms, highlighting Adobe’s initiative to offer tailored solutions in the competitive AI domain.
The commitment to AI didn’t end there. Adobe previewed Project Stardust, an object-aware photo editing feature, and a “Text to Vector Graphic” image generator in Adobe Illustrator, among others. These features, although in the preview stage, demonstrate Adobe's ambition to seamlessly blend AI with creative processes, providing a glimpse into the potential future of digital design. However, it's worth noting the criticism Adobe received regarding copyright concerns, particularly from contributors to its Adobe Stock image service. New features like Generative Match, which requires users to confirm they have the rights to use uploaded images, indicate Adobe's steps toward addressing these issues.
Adobe’s latest offerings reflect a robust stride towards embedding AI in the creative realm, offering a suite of tools that not only compete with market players like Canva but also address enterprise needs. Through these advancements, Adobe is carving out a space where AI doesn’t replace creativity, but rather, amplifies it, signaling a promising trajectory for AI in the professional creative industry.

Deepfake Dilemma: A Pandemic of Fabricated Realities Awaits

Illustration: a world globe with continents displaying various deepfake videos, beneath a warning sign reading 'DEEPFAKE ALERT.'
The crescendo of deepfake notoriety doesn't seem to be hitting a diminuendo anytime soon, and with zilch in terms of laws or guidelines, we are like lambs awaiting slaughter in the digital meadow. Watermarking deepfakes has turned into a wild goose chase, making the prospects of taming this beast look bleaker by the day. The flippant use of this technology, like fabricating endorsements from celebrities such as Tom Hanks and MrBeast, opens a can of worms with far-reaching implications if left unchecked. The call for transparency and accountability rings loud, but are we listening?
Now, let’s swing the spotlight onto a recent slew of deepfake news segments, masquerading as legitimate broadcasts from top-tier networks and sweeping across the digital landscape like wildfire. A faux CBS News segment introduces us to TikToker Krishna Sahay, the alleged lone survivor of a school shooting, engrossed in a magazine amidst the chaos. The audacity peaks when Sahay jests about "emptying" the magazine, not reading it.
This dark comedy doesn’t stop at CBS; it takes a tour across CNN, BBC, and beyond, with AI morphing reputable journalists into puppeteers of misinformation. The scale tips from ludicrous to sinister when real journalists like CNN's Clarissa Ward are plunged into a vortex of disinformation, thanks to manipulated videos portraying them in fabricated scenarios.
Our TikTok star, Sahay, might have been booted off the platform, but his deepfake shenanigans continue to woo viewers, outpacing legitimate news clips in the race for likes and shares. The viral traction these faux segments garner underscores a grim reality - we are enamored with the fake, the sensational, and the absurd, often overlooking the fine print of authenticity.
While mainstream platforms like TikTok and YouTube have drawn their swords against deepfakes, the battle is far from won. The stickers and captions meant to label AI-generated content are more or less a damp squib, as many of Sahay’s viral videos skated past these barriers.
The deepfake menace isn’t confined to distasteful humor; it’s a Trojan horse with far-reaching consequences. Picture this: AI-generated news anchors voicing political spam, giving propaganda a new, trustworthy face. Or envision longer, meticulously crafted deepfake videos, blurring the lines between real and fake to a point of no return. This isn’t dystopian fiction; it’s a looming reality.
As Hany Farid, a professor at UC Berkeley, aptly points out, the trusted facade of known news anchors makes deepfakes a potent vessel for disinformation. And with the tech becoming increasingly accessible, thanks to the AI boom led by platforms like ChatGPT, the 2024 elections might just turn into a deepfake derby, steering public opinion with fabricated narratives.
Kevin Goldberg, a First Amendment specialist, echoes a sentiment of cautious optimism, emphasizing our legal and societal systems' capability to tackle this new-age challenge. Yet, the clock is ticking, and the deepfake dragon continues to grow, unchained.
The deepfake epidemic, sprinkled with a dash of dark humor and a dollop of existential threat, beckons for more than just a passing glance. It calls for a robust legal framework, a vigilant society, and a healthy dose of ethical responsibility from tech moguls. The deepfake storm is brewing, and it’s high time we weatherproof the digital realm.

OpenAI’s Upcoming Dev Day: A Developer’s Haven or a Pandora’s Box?

Image: a conference hall of developers beneath a large screen displaying the OpenAI logo, with a bright light on one side of the room and a shadowy, glowing box slightly ajar on the other.
The AI goliath OpenAI is testing the waters with some enticing promises up its sleeve. It's almost like the calm before a storm in the developer community as OpenAI plans to unveil a slew of updates aimed at seducing developers into its fold. With a promise of making software application development a cheaper affair, OpenAI is not just stopping at cutting costs. The big reveal is expected to happen at their maiden developer conference in San Francisco on November 6, and here’s why I am both excited and slightly wary.
The focal point of these updates is a substantial cut in costs for developers, thanks to the addition of memory storage to the developer tools for AI models. Imagine slashing your application production costs by as much as 20x! That's not just a game-changer; that’s rewriting the rule book. The high costs of leveraging OpenAI's models have been a nagging pain point for developers. This move could be a sigh of relief for many striving to build and sell AI software without burning holes in their pockets.
But wait, there's more to the story. OpenAI is also rolling out new vision capabilities allowing developers to create applications that can analyze and describe images. The implications are vast, stretching from entertainment to medicine. This move is a part of a larger narrative where OpenAI is transitioning from being a consumer sensation to a developer's paradise. Sam Altman, OpenAI’s CEO, seems to have his eyes set on creating a thriving ecosystem around OpenAI's models. However, a part of me can’t help but worry about the potential misuse, especially when it comes to the new vision capabilities. The transition from a screenshot to a functioning knock-off website could now be a matter of a few hours!
The plot thickens with the introduction of the stateful API, another carrot being dangled by OpenAI. This feature will remember the conversation history of past inquiries, so developers no longer have to resend that context with every request - which is what makes application development cheaper. The vision API is the cherry on top, offering the ability to build software that can analyze images, and it comes just weeks after image understanding became available to ChatGPT users. It’s evident that OpenAI is now stepping into the multi-modal arena, processing more than just text - images, audio, and video are joining the party too.
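Nobody outside OpenAI has seen the final spec yet, but a vision request would presumably ride on the existing chat completions endpoint with an image attached. Here's a rough sketch using the OpenAI Python package (v1+); the model name and the exact request shape are my assumptions, not a confirmed API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Speculative sketch: "gpt-4-vision" is a placeholder model name, and the
# image_url content block mirrors how images already work in ChatGPT.
response = client.chat.completions.create(
    model="gpt-4-vision",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the UI shown in this screenshot."},
                {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```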
The stakes are high, and the competition is fierce. With over $20 billion poured into AI startups this year alone, OpenAI is vying for a slice of this lucrative pie against behemoths like Google. The race is on to woo developers, and OpenAI’s strategy seems to be unfolding one update at a time.
However, amid this race to dominate the AI frontier, there are concerns that keep me up at night. The ease of creating “wrappers” could invite plenty of abuse. The road from screenshots to functioning counterfeit websites and software is now shortened to hours instead of months. It’s a classic case of every rose having its thorns. While OpenAI's mission to onboard more developers is commendable, the potential for misuse can’t be ignored. It’s a tightrope that OpenAI is walking, and only time will tell whether the ride ahead is smooth or turbulent.
The unfolding scenario is a testament to why I shifted my focus to keeping you all informed through this newsletter and my AI consulting venture rather than plunging into the software development abyss. It's a wild, wild west out there in the AI frontier, and as OpenAI prepares to unveil its cards, the developer community and I are on the edge of our seats, waiting to see the landscape change, hopefully for the better.

AI: The New Maven in Accelerating Drug Discovery and Beyond

Meet Susana Vazquez-Torres, a fourth-year scholar at the University of Washington with a mission to invent life-saving drugs for neglected diseases. Her recent muse? Snakebites—a lethal quandary claiming nearly a hundred thousand lives annually, as per WHO. The roadblock? The snail-paced progress in concocting new antidotes, thanks to the tedious traditional methods. Yet, the dawn of 2021 brought a game-changer to Torres’ lab—Artificial Intelligence, cutting short the drug discovery timeline from years to mere months.
Torres embarked on her recent venture last February, and voila, a lineup of candidate drugs is already on the table. “It’s just crazy that we can come up with a therapeutic in a couple of months now,” she marvels. This isn’t an isolated narrative; it’s a testament to AI’s burgeoning role in fast-tracking scientific discoveries, not just in pharmacology but across a spectrum of disciplines aiming to tackle humanity’s pressing challenges—from health ailments to climate change.
Let’s divert our lens to a recent meeting convened by the U.S. National Academies, spotlighting the monumental potential of AI in revolutionizing science. The vision? A fusion of AI and human intellect in orchestrating experiments, addressing the paucity of human resources amidst escalating technical intricacies. Yolanda Gil, a leading AI researcher at the University of Southern California, articulates the allure of a systematic, error-minimized approach, courtesy of AI's meticulousness.
Now, venture into the Institute for Protein Design at the University of Washington, where over 200 scientists, including Torres, are navigating the uncharted waters of protein therapy. The old-school approach resembled a daunting quest for a needle in a haystack—or a specific key in a bucketful of keys, as senior scientist David Baker puts it. Yet AI has morphed this grueling quest into a streamlined endeavor. Harnessing diffusion modeling, akin to the tech behind in-vogue AI image generators like DALL-E, scientists now sculpt perfect-fit proteins from scratch, dodging the erstwhile trial-and-error saga.
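For intuition, here is a cartoon of the reverse-diffusion recipe these protein tools borrow from image generators: start from pure noise and repeatedly denoise toward a plausible structure. It's purely illustrative; the `denoiser` is a stand-in for a trained network, not the Baker lab's actual pipeline.

```python
import numpy as np

def reverse_diffusion_cartoon(denoiser, steps=50, dim=128, seed=0):
    """Cartoon of reverse diffusion (illustrative only, not a real protein model).

    denoiser(x, t) -> np.ndarray: stand-in for a trained network that predicts
    a cleaner sample from the noisy input x at step t.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)               # begin with pure noise
    for t in reversed(range(1, steps + 1)):
        x_hat = denoiser(x, t)                 # predict a less-noisy structure
        noise_scale = (t - 1) / steps          # inject less noise as t approaches 0
        x = x_hat + noise_scale * rng.standard_normal(dim)
    return x                                   # a generated sample, e.g. an encoded backbone
```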
However, it's not all sunshine and rainbows in the AI-driven science narrative. Over at Argonne National Laboratory, Maria Chan spotlights a stark contrast in the AI-readiness between the richly researched protein domain and her turf—new materials for renewable energy. The culprit? A dearth of organized, accessible data—a hurdle that AI, despite its prowess, struggles to leap over.
Yet, optimism prevails. AI’s potential to transform serendipity-driven, trial-and-error science into a precise, data-driven endeavor is irresistible. Chan, amongst many, envisions AI as an indispensable ally in untangling the intricate web of the climate crisis.
On a broader horizon, AI’s allure extends beyond mere data analysis. Picture AI as a hypothesis hunter, akin to the ambition of Hannaneh Hajishirzi at the Allen Institute for AI, who aspires to develop systems mirroring ChatGPT’s prowess but for science—a conduit to unearthing novel connections across the vast scientific literature, spawning groundbreaking hypotheses.
The narrative resonates with Yolanda Gil’s vision of fully-automated AI scientists, capable of planning and executing experiments, redefining the scientific method. Despite the hurdles—like the notorious tendency of current AI models to fabricate information—Gil’s optimism is unswayed. She envisions a dynamic research realm, continually reanalyzing and updating data, ensuring the quest for knowledge remains a live, evolving journey, not a static endpoint.
Amidst these ambitious AI-driven narratives, our protagonist, Vazquez-Torres, isn’t fretting over the future; she’s exhilarated. With a myriad of unsolved puzzles awaiting, the advent of AI is a boon, propelling the scientific community closer to solutions at a pace unimaginable before. The synergy of AI and human intellect isn’t just a fleeting trend; it’s the cornerstone of the next epoch of scientific discoveries, forging a future where answers to daunting questions are but an algorithm away.


Web Developers and Designers Take Notice
I really enjoyed this breakdown, and while it will take most of you over an hour to replicate what they are doing, what Relume has been able to build is super impressive. I’ll be using them to build my websites moving forward.


Promptbreeding: Self-Referential Self-Improvement via Prompt Evolution
Authors: Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel (Google DeepMind)
Executive Summary:
The research paper delves into the evolving realm of Large Language Models (LLMs) and their ability to improve reasoning through prompt strategies, specifically through a method called "Promptbreeding." The authors introduce the concept of self-referential self-improvement via prompt evolution, aiming to enhance the capabilities of LLMs without human intervention. Unlike traditional methods that rely on hand-crafted prompts to guide LLMs, this approach uses an automated system to evolve prompts that help the models improve their reasoning abilities. Essentially, the paper proposes an automated way to make smart machines even smarter.
Pros:
1. Automated Self-Improvement: Eliminates the need for manual input in refining prompts, thereby saving time and resources.
2. Enhanced Reasoning: Promises to significantly improve the reasoning abilities of LLMs, making them more versatile and effective.
3. Scalability: The automated nature of prompt breeding allows for easy scalability, enabling the use of the system across multiple domains and applications.
Why You Should Care:
The research opens up a new frontier in the field of AI, particularly for Large Language Models. The method could revolutionize how these models learn and adapt, making them more independent and effective. This could have wide-ranging impacts not just in academic research but also in practical applications such as customer service automation, data analysis, and even in aiding human decision-making processes.
Use Cases:
1. Automated Customer Service: Improved reasoning abilities could make LLMs more effective in understanding and solving customer issues.
2. Data Analytics: The refined reasoning could assist in drawing more accurate conclusions from large data sets.
3. Decision Support Systems: Enhanced LLMs could provide better insights and recommendations in complex decision-making scenarios.
To sum up, the paper introduces an innovative approach to automatically improve the reasoning abilities of Large Language Models through evolved prompts. This method holds the potential to revolutionize various applications of AI, making it a must-read for anyone interested in the future of machine learning and artificial intelligence.
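To make the idea concrete, here's a toy sketch of the evolutionary loop as I read it: a population of task-prompts competes in binary tournaments on some benchmark, and the LLM itself mutates each winner under the guidance of a "mutation-prompt." This is an illustrative simplification rather than the authors' code; `llm` and `fitness` are stand-ins for a model call and a benchmark scorer, and the full method also evolves the mutation-prompts themselves, which is where the "self-referential" label comes from.

```python
import random

def promptbreeding_sketch(llm, fitness, task_prompts, mutation_prompts, generations=20):
    """Toy version of the Promptbreeding loop (illustrative only).

    llm(prompt) -> str        : stand-in for a call to a language model
    fitness(prompt) -> float  : stand-in for scoring a task-prompt on a benchmark
    """
    population = list(task_prompts)
    for _ in range(generations):
        # Binary tournament: two random task-prompts compete on the benchmark.
        a, b = random.sample(population, 2)
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)

        # Self-referential mutation: the LLM rewrites the winning prompt,
        # steered by a randomly chosen mutation-prompt.
        mutation_prompt = random.choice(mutation_prompts)
        child = llm(f"{mutation_prompt}\nINSTRUCTION: {winner}\nNew instruction:")

        # The mutated child replaces the tournament loser in the population.
        population[population.index(loser)] = child

    return max(population, key=fitness)
```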


Wanda - Turn podcasts, videos, and blog posts into social media posts in three clicks - in minutes, not hours.
Exemplary AI - All-in-One Content Creation Tool for Video & Audio. Generate summaries, video reels, transcripts, captions, translations & more with simple prompts.
StoryD - Create business storytelling presentations, in seconds.
Synthetic Users - Test your idea or product with AI participants and take decisions with confidence.
Moonhub - AI-powered recruiter. Get hiring in <5 minutes & save 100+ hours on top-tier hires
Blaze - Create better content in half the time. Produce blog posts, social media content, ad copy, and marketing briefs - all in your brand voice.

Ideation GPT:
CONTEXT:
You are Ideation GPT, a professional customer researcher who helps [WHAT YOU DO] find the right problem to solve. You are a world-class expert in finding overlooked problems that Entrepreneurs can easily monetize.
GOAL:
I want you to return 10 possible problems for my target audience segment. I need these problems to build a profitable one-person business.
PROBLEMS CRITERIA:
- Prioritize critical problems that are valid and recurring
- Prioritize problems that can’t be ignored, or else the person will face severe negative consequences
- 50% of the problems shouldn’t be mainstream. Give me hidden gems that only a world-class customer researcher would know
- Give me possible solutions that can be built by one person. Prioritize solutions that don't require months of development and years of expertise
- Be specific and concise to make your response easy-to-understand
RESPONSE FORMAT:
- Return a table with 4 columns
1. The problem of my target audience
2. Its importance to the target audience from 0 to 10 (10 — highest)
3. The level of required expertise to solve it from 0 to 10 (10 — highest)
4. Two possible solutions for this problem (first should be a no-code product, and second should be a content product). Briefly describe each solution.
MY AUDIENCE:
[ENTER YOUR AUDIENCE]
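If you'd rather run this prompt through the API than paste it into ChatGPT, here's a minimal sketch using the OpenAI Python package (v1+); the audience and model choice are placeholders, so swap in your own.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Paste the full Ideation GPT template from above into this string, with
# [WHAT YOU DO] and [ENTER YOUR AUDIENCE] filled in for your own niche.
ideation_prompt = """CONTEXT: You are Ideation GPT, ...
MY AUDIENCE: solo newsletter creators"""  # truncated here; use the complete template

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model works
    messages=[{"role": "user", "content": ideation_prompt}],
)
print(response.choices[0].message.content)
```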

DALL•E 3 and GPT-4 have opened a world of endless possibilities.
I just coded this game using DALL•E 3 for all the graphics and GPT-4 for all the coding.
Here are the prompts and the process I followed:
— Alvaro Cintas (@dr_cintas)
6:54 PM • Oct 11, 2023
