States & Scams: Navigating the AI Frontier in Governance, Defense, and Cybersecurity
From State-Backed AI Panels to Deepfake Dangers: The Race to Harness AI's Promise and Prevent Its Perils


This edition of Aideations is a comprehensive look at how U.S. states are getting serious about AI ethics and regulation, the increasing sophistication of deepfake scams, the U.S. Air Force's development of AI-powered drones, and AI's potential impact on cybersecurity. Also included: insights on small businesses using AI, the future of education in an AI-driven world, and a peek into AI's role in fandom and other industries.
As always, feel free to reach out to me anytime at [email protected] and please continue to share this newsletter with everyone!
States Make Their Move: Unpacking AI's Role in Governance
AI's evolving relationship with state governments is under the spotlight: from ethical considerations to job markets and elections.
The New Face of Scams: Deepfakes' Rising Tide
The shift from robotic voices to deepfake tech in scams: an unnerving reality we can't afford to ignore.
AI in the Skies: The Air Force's Future Companions
A closer look at the U.S. Air Force's plans for AI-powered drones: opportunities and ethical landmines.
AI's Dual Nature in Cybersecurity: Savior or Saboteur?
Explore how AI can be a boon or a bane in cybersecurity, making or breaking the system.
📰 News From The Front Lines
📚 Tutorial Of The Day
🔬 Research Of The Day
🎥 Video Of The Day
🛠️ 6 Fresh AI Tools
🤖 Prompt Of The Day
🐥 Tweet Of The Day
States are Swiping Right on AI: Here's How They're Making Sure It's Relationship Material, Not a Total Creep!

A state government building illuminated with neon "AI" signage, medium: ultra-high-resolution photography, style: a blend of architectural photography and cyberpunk aesthetics, lighting: neon lights casting a futuristic glow on the building, colors: vibrant blues and purples contrasted with the dark night sky, composition: shot with a Canon EOS-1D X Mark III, EF 24mm f/1.4L II USM lens, Resolution 20.1 megapixels, ISO 100, Shutter speed 1/60 second --ar 16:9 --v 5.1 --style raw --s 750
States are waking up to the fact that artificial intelligence is not some sci-fi fantasy. It's here, and it's like a powerful new roommate that you have to figure out how to live with. You know, will they clean their dishes or turn your life into an Orwellian nightmare? Some states are saying, "Before we hand over the keys, let's get to know you."
Minnesota is doing its homework like a student on an Adderall binge. They're diving deep into how AI could potentially play Big Brother with civil rights, especially when it comes to law enforcement. Then there's North Dakota, not wanting to be left in the tech dust, examining how AI might throw its hat in the ring for matters like job markets and even the 2024 elections. That's right, folks, Skynet could be a swing state.
Three amigos (Connecticut, Texas, and Washington) are forming what I'd like to call the Justice League of AI Panels. Their mission? To check if automated systems are pulling a fast one and discriminating against citizens based on race, gender, or religion. I mean, it's 2023; if my Roomba can avoid dog toys, we can program fairness into our algorithms, right?
Wisconsin also joined the club this week, with Governor Tony Evers issuing an executive order for an AI task force that sounds like it was named by a committee: "The AI Task Force Devoted to Labor Issues." Seriously, I've seen Twitter handles with more charisma.
Here's where it gets spicy. Lawmakers aren't just looking to put guidelines around state operations. Oh no, they're setting the stage for regulating the tech world, too: everything from algorithmic decision-making to those fancy language models that process data faster than I can say "pass the sriracha."
Connecticut Sen. James Maroney likens lawmaking to painting a house. No, really. He says task forces and research are the "prep work." Kinda like taping up the trim before going to town with that roller. It's a little less sexy but ensures you don't end up with a polka-dot living room. If you skip the prep, don't blame anyone if the end result is a messy wall of bad laws and unintended consequences.
The brass tacks? States are shaking off their hesitancy about blue-ribbon panels because, let's face it, the stakes are too high. We're not just talking about your Spotify playlists here, but issues that affect "liberty, finances, livelihood, and privacy interests." Essentially, states are saying, "We don't want to mess this up and end up in a 'wild, wild west' of unregulated tech."
Vermont, ever the overachiever, is already schooling us on how it's done. They've created a public inventory of the state's AI assets and even set up an Artificial Intelligence Advisory Council. Vermont is the friend who not only brings a tent to the camping trip but also remembers the bug spray and portable phone charger.
What's the takeaway? If you're not thinking about how AI will affect your life, start now because the states are gearing up to make some real decisions. And remember, we're all in this roommate selection process together. Let's just make sure AI is more of a helpful, dishwashing type rather than the one stealing our privacy... or worse, our snacks.

From Free Cruises to Fake CEOs: How Deepfake Scams Are Fooling Even the Tech-Savviest Among Us

A tech-savvy individual sitting in a modern office, staring in disbelief at a computer screen displaying a deepfake video of a fake CEO, medium: ultra-high-resolution photography, style: influenced by the tension-filled compositions of Alfred Hitchcock, lighting: dramatic, high-contrast lighting to emphasize the individual's expression, colors: a muted palette to underscore the gravity of the situation, composition: shot with a Canon EOS 5D Mark IV, EF 85mm f/1.4L IS USM lens, Resolution 30.4 megapixels, ISO 200, Shutter speed 1/125 second --ar 16:9 --v 5.1 --style raw --s 750
Ah, remember the good ol' days when scam calls were just robotic voices claiming you'd won a "free cruise"? Well, welcome to Scamming 2.0, where hackers use deepfake tech to sound exactly like your boss, asking you to wire money ASAP.
Take the CEO of a British energy company, for instance. A couple of years ago, he got hoodwinked into sending €220,000 ($249,000 for those of us who don't eat croissants) to a scammer who mimicked the voice of his German counterpart. Euler Hermes, the insurer, confirmed this as an AI-generated voice scam. I mean, if you can't trust your ears, what's left? Your gut instinct that always tells you to buy crypto right before it tanks?
As if voice deepfakes weren't enough, video deepfakes have entered the chat. Imagine jumping on a Zoom call and seeing what appears to be your boss. And this isn't some badly dubbed Kung Fu movie; we're talking AI-crafted, real-time deception. Mandiant, a cybersecurity firm, recently documented deepfake tech specifically designed for phishing. The cost? A measly $20 a minute. That's cheaper than my last therapy session talking about my trust issues, which are clearly well-founded.
But wait, it gets crazier. Udi Mokady, the big kahuna at CyberArk Software, was gobsmacked to find himself staring at his own deepfake in a Microsoft Teams call. The culprit? His employee, Gal Zror, who did it as a "Hey, this could happen to us" warning. Zror didn't just Google "how to make a deepfake"; the guy deep-dived into forums frequented by, get this, deepfake pornographers. The point? To show that if he could create a deepfake, anyone could.
Zror even presented his deepfake magic trick at a hacker conference, earning himself applause as he turned into the conference organizer, right there on screen. But let's face it, CyberArk is a small fry compared to giants like IBM or Walmart. Imagine tricking employee number 15,003 who might never even have met their CEO. That's more dangerous than grandma mixing up salt with sugar in her apple pie recipe.
So, what's the moral of this Black Mirror episode we're living in? First, educate your team, especially if your team is the size of a small country. Second, add layers of security like it's a winter coat; two-factor authentication is your friend here. And finally, trust but verify. If you get a call from "me" asking you to invest in a llama farm, maybe shoot me a text first, eh? Unless, of course, you know that investing in llama farms has been my dream since 3rd grade. Then it's totally legit. 🦙
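To make that "trust but verify" advice concrete, here's a minimal sketch of what a second factor can look like in code. It assumes the pyotp package, and the secret handling and function names are illustrative, not a prescribed setup.

```python
# Minimal sketch of "trust but verify": even if the voice on the call sounds
# exactly like the boss, a time-based one-time password (TOTP) exchanged over a
# separate channel forces the caller to prove they hold a shared secret.
# Assumes the pyotp package; the secret and function below are illustrative.
import pyotp

# In practice this secret is provisioned once and lives in an authenticator app,
# not in source code.
SHARED_SECRET = pyotp.random_base32()
totp = pyotp.TOTP(SHARED_SECRET)

def verify_urgent_request(code_from_caller: str) -> bool:
    """Approve the 'wire the money now' request only if the one-time code checks out."""
    return totp.verify(code_from_caller)

print(verify_urgent_request(totp.now()))  # True: the real boss reads the current code
print(verify_urgent_request("123456"))    # almost certainly False: a deepfake can't guess it
```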

Top Gun Meets Terminator: Why the Air Force's New AI-Powered Drones Are Both Mind-Blowing and Terrifying

a squadron of futuristic AI piloted fighter jets flying through the sky, explosions on the ground, sunset, bomb --ar 16:9 --v 5.2
We're about to talk Top Gun, Skynet, and billions of your tax dollars, all swirling together in an aerial ballet choreographed by Uncle Sam and Silicon Valley. Yep, the U.S. Air Force is eyeing a hefty budget to build 1,000 to 2,000 AI-powered drone sidekicks. Think of these XQ-58A Valkyrie aircraft as Goose to your Maverick, if Goose was made of metal and had an affinity for "suicide missions."
We're not talking about some far-off fantasy; these bad boys are already in the works. Later this year, Valkyrie drones will be put to the test, simulating their own hunt-and-destroy missions over the Gulf of Mexico. "Talk to me, Goose!" Except Goose is an AI algorithm optimizing kill vectors: kinda cool but also kinda terrifying, right?
These AI pilots aren't slouches, either. The Valkyrie drones can zoom around at 550 mph and soar at 45,000 feet, all while hitting a range of 3,000 nautical miles. And get this: each one's price tag could be as "low" as $3 million. Compared to a manned jet, that's like choosing between a Ferrari and a used Honda Civic. The Air Force is pretty serious about these robot wingmen; they're asking for a cool $5.8 billion over the next five years.
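For a rough sanity check, here's the back-of-the-envelope arithmetic on the figures quoted above (and only those figures):

```python
# Rough consistency check using only the figures quoted above:
# 1,000 to 2,000 drones at roughly $3 million apiece, versus a $5.8 billion
# request over five years. Purely illustrative arithmetic.
unit_cost = 3_000_000            # "as low as" $3M per Valkyrie-class drone
fleet_low, fleet_high = 1_000, 2_000
budget_request = 5_800_000_000   # $5.8B over the next five years

low_total = unit_cost * fleet_low / 1e9
high_total = unit_cost * fleet_high / 1e9
print(f"Fleet cost estimate: ${low_total:.1f}B to ${high_total:.1f}B")
print(f"Budget request:      ${budget_request / 1e9:.1f}B")
# Prints $3.0B to $6.0B, which brackets the $5.8B ask, so the numbers roughly line up.
```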
I remember playing with remote-control airplanes as a kid and thinking, "Wouldn't it be cool if this could fly itself?" Well, I got my wish, but the grown-up version is equipped with military-grade firepower and has been beta-tested alongside F-22s and F-35s. It's like your childhood fantasies got a Ph.D. in Aeronautical Destruction.
But hold your horses, Sarah Connor. There's a Skynet-shaped cloud hanging over this techno-utopia. Some folks are really, really worried that we're stepping into a "Terminator"-esque nightmare. Human rights advocates are sounding the alarm, saying we're crossing a moral Rubicon by letting algorithms make life-and-death decisions. It's one thing for AI to recommend your next Netflix binge; it's another for it to decide who gets a missile up the tailpipe.
Even the United Nations is throwing shade. Secretary-General António Guterres has been on the record since 2019, saying these killbots are "politically unacceptable" and "morally repugnant." Ouch. That's not the kind of review you want on your Tinder profile.
So, what's my take? Proceed with caution. Tech like this could revolutionize warfare, save human lives on our side, and make military operations more efficient. But if we rush in without clear guidelines, we're flirting with chaos and a bucket load of ethical landmines. You don't have to be John Connor to know that handing over the kill switch to a machine should make us all pause and think.

From Cyber Hero to Possible Villain: How AI Could Either Save or Sabotage Your Online Security

A split-screen image featuring on one side a heroic AI figure in a superhero costume, and on the other side the same figure but with a villainous appearance, medium: ultra-realistic digital art, style: a blend of comic book art and cyberpunk aesthetics, lighting: bright and optimistic on the hero side, dark and ominous on the villain side, colors: vibrant reds and blues for the hero, dark purples and blacks for the villain, composition: shot with a Fujifilm GFX 100S, GF 32-64mm f/4 R LM WR lens, Resolution 102 megapixels, ISO 100, Shutter speed 1/125 second --ar 16:9 --v 5.1 --style raw --s 750
Ah, AI: the buzzy tech that's everywhere, from making your Spotify playlist to, uh, helping me write this sentence. Yeah, it's literally writing the story about itself; how meta is that? But the conversation's been heating up about AI and Large Language Models (LLMs) like ChatGPT potentially flipping the cybersecurity game on its head. No kidding. It's like assigning your puppy to guard the house. Cute and helpful, but there's a good chance things could get messy.
Now, don't get me wrong. AI could be the fairy godparent cybersecurity has been waiting for: magically drafting and scanning code, real-time threat analysis, and relieving overworked, under-caffeinated security experts from the grunt work. Imagine swapping those mind-numbing Excel sheets for some good ol' human problem-solving. More time for company ping pong tournaments, anyone?
But let's pump the brakes. AI and LLMs are still the awkward teenagers of the tech world. They've got potential, but they're also a liability. Turns out, the same tech that could boost cybersecurity can be co-opted by the baddies, the online villains. Picture a cat burglar using your own ladder to sneak in through the second-floor window. Not so convenient now, is it?
Now, I hear you asking, "But how bad could it be?" Well, get this: Malicious types are already using AI to spin off new variants of harmful code; imagine replicating a webshell attack like it's going out of style. And trust me, they're gonna keep iterating faster than you can say "zero-day vulnerability."
And if you thought that open-source is the magic potion, think again. A lot of the baddies are using open-source LLMs to automate their evil deeds. It's like offering a free getaway car with a map to the bank vault. Before we know it, we'll see a boom in zero-day attacks, and that's a day no one's looking forward to.
So what's the playbook here? No silver bullets, but how about fighting fire with fire? Just like the bad guys, organizations should use AI to scan their code for security holes faster than you can find plot holes in a soap opera. If you're pulling in code from the cyber realm's equivalent of a public library, make sure you're not also importing its malware history.
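As a sketch of what "fighting fire with fire" might look like in practice, here's a minimal example that points an LLM at a code snippet and asks it to flag likely holes. It assumes the openai Python package; the model name is a placeholder, and a real pipeline would walk an entire repo and route findings to human reviewers.

```python
# Minimal sketch: ask an LLM to flag likely security holes in a snippet.
# Assumes the openai package and an OPENAI_API_KEY in the environment;
# the model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

SNIPPET = '''
import sqlite3

def get_user(db, username):
    cur = db.cursor()
    cur.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cur.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List likely vulnerabilities and suggested fixes."},
        {"role": "user", "content": f"Review this code:\n{SNIPPET}"},
    ],
)

print(response.choices[0].message.content)  # should call out the SQL injection
```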
The key takeaway? Whether you're an Elon Musk fanboy or think AI is the beginning of Skynet, this tech isn't going back in the bottle. It's like learning to live with a roommate who sometimes eats your food. You've got to set boundaries. And folks, the industry bigwigs are echoing the same thing. An "AI pause" might be on the horizon, but for now, let's be smart about this tool before it's smart enough to be dumb about us.


Learn LangChain: Thank Me Later


Title: Global Planning for Contact-Rich Manipulation via Local Smoothing of Quasi-dynamic Contact Models
Authors: Tao Pang, H.J. Terry Suh, Lujie Yang, and Russ Tedrake
Executive Summary:
The paper tackles one of robotics' trickiest problems: planning motions where a robot has to make and break contact with objects, deciding where to push or grip and how hard. Reinforcement Learning (RL) can solve these tasks, but largely through expensive trial and error. The authors' insight is that much of RL's success comes from the way its randomness smooths out the harsh on/off nature of contact. Instead of training with RL, they apply that smoothing directly to a simplified (quasi-dynamic) physics model of contact and plug the result into a sampling-based planner. The combination lets a robot plan whole manipulation sequences, like reorienting an object in its hand, far more efficiently than learning from scratch.
Pros:
Combines the smoothing trick behind RL's success with efficient model-based planning, making it versatile.
Can handle complicated tasks that involve a lot of touching and moving objects.
Potentially makes robots smarter and more efficient at tasks.
Cons:
The method might be hard to implement in simpler robots.
It may not work well in all types of situations or environments.
Use Cases:
Industrial Robots: For tasks like sorting items on a conveyor belt or assembling parts.
Household Robots: Could be used in robots that help with chores like cleaning or cooking.
Specialized Tasks: Such as robots used in healthcare for rehabilitation exercises.


Salssy - The #1 conversational LinkedIn Sales Agent.
Airparser - Revolutionize data extraction with the GPT parser. Extract structured data from emails, PDFs, and documents. Export the parsed data in real time to any app.
Altero - A team of AI agents that search the web and synthesize data from various sources to produce market research reports. Get up to speed on any company and streamline your due diligence.
Levi - Create stunning and professional websites in seconds.
Layla - Personal AI that resides directly on your phone or device. Layla goes beyond standard AI functionalities and evolves alongside you, becoming your ultimate personal assistant.
Triplay - Craft a one-of-a-kind travel plan tailored to your preferences. Alongside a list of well-known attractions like museums, parks, and monuments, you'll also receive exclusive choices.

Today's Prompt comes from our friends at Prompts Daily
Create Your Elevator Pitch:
Analyze [product]. Help me create a concise and compelling elevator pitch that will effectively communicate the value of my offering.
Product = [Insert here]
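And a trivial sketch of how you'd fill in that placeholder before handing the prompt to your chat model of choice (the product description below is made up for illustration):

```python
# Fill the [product] placeholder in the prompt above; the product is made up.
TEMPLATE = (
    "Analyze {product}. Help me create a concise and compelling elevator pitch "
    "that will effectively communicate the value of my offering."
)

product = "a weekly AI newsletter covering policy, security, and new tools"
print(TEMPLATE.format(product=product))
```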



ChatGPT Prompts to help improve your Marketing Game through Mental Models
[bookmark this for later reference]
1. Zeigarnik Effect
"Please write a marketing campaign outline that utilizes the Zeigarnik Effect to create a sense of unfinished business or curiosity among [ideal⊠twitter.com/i/web/status/1âŠ
- Anurag Agarwal (@Anurag_Creates)
2:37 PM • Aug 27, 2023
