AI and its principal concepts have been around throughout my career. I never thought I would wrestle with “Theoretical AI” directly, tussling through a time of rapid change and increasing personal clarity. And I never expected to stop pursuing customers, just as I was getting going, in order to write a self-defined policy first.
Over the 28 years of my digital and entrepreneurial journey, I’ve been involved in 6 startups, including co-founding two. Perceptint, my current consulting practice, was founded over ten years ago as a personal career dojo.
It’s where I’ve pushed my thinking and stretched into some fantastic work that’s always led to the next. In January, my role was eliminated. The startup I was with is stretching toward break-even and its next funding round.
This was always a possibility. I want them to succeed and hope they will. I bring it up for context on how I found myself back here, assessing my next career phase.
Over the past five years, I’ve been fortunate to be exposed to some extensive AI-enabled projects that taught me more than many of the projects from the 20+ years before. AI is not some magic you can sprinkle over a business to make it pixie-dust better.
So how did we get here? I can only tell it through my lens.
I’ve been aware of and interested in the concepts of artificial intelligence, robotics, and assistive agents my whole life. I knew the sci-fi tropes around things like the Turing Test, used famously in Blade Runner to tell the difference between a “Replicant” and a human. CNN picked up one of my early articles in 2001 about the marketing for the movie A.I. Artificial Intelligence because of the growing interest in the “futuristic” tech.
The AI I know today is radically faster, more advanced, capable, and frankly more dangerous than the nuclear bomb. Not because AI will ultimately turn on us and remove humanity as one of the problem variables for the planet’s proper eco-balance.
It’s because if businesses run headfirst into these tools as they did in adopting the commercial web and all the progress since, they will hit a collective atomic split and explode from the weight of years of laziness in data cleanup, misunderstanding customer needs, their hubris, and a complete lack of planning for change.
As a business leader who jumped into entrepreneurship at the whiff of the commercial internet 28 years ago, I’ve learned a lot of lessons. Lately, I reflect on the current state of advances and pause longer.
Take me way back…
I remember reading (and recently re-reading) the Cluetrain Manifesto in April of 1999 and feeling it perfectly captured the day’s spirit. As part of the first wave of pioneers in the digital world, I was a few months from closing the first startup I co-founded.
We had reached a point in our digital desert, Bend, Oregon, where recruiting engineers (the hottest capital around in ‘99) meant doubling or tripling our services revenue if we could afford one talented developer. The salaries were doubling and tripling for them in San Francisco, Portland, and Seattle. So we closed the business, and I moved to Seattle, working with Microsoft, MSN, and several other tech and large organizations.
Since then, I’ve worked with or for Kaiser Permanente, Meijer, Blue Cross Blue Shield Michigan, Mentavi Health, and several other organizations. Over the past five years, I’ve worked on several AI-driven or influenced projects.
In 2020, I worked hands-on with my first solution, mixing behavioral science, AI, and omni-channel messaging. We were given the underdog task: to take the worst-performing category of customers without contact information and see if we could upsell them to a new health plan. The incumbent agency we were competing with had 80% more budget, email lists, richly defined segments, and a robust channel mix, including lots of direct mail.
Picking the right Medicare plan is a hugely stressful life event that gets more complicated yearly. Our target was people approaching eligibility to switch to Medicare who had been in group plans from work most of their lives. When they turned 65, they had to pick an individual plan from Medicare or a Medicare Advantage partner.
So how did we do with our throwaway audience with no contact information? We outperformed the incumbent agency by 500% in the first 90 days.
It turned out that the AI-informed targeting, matching health claims from providers and pharmacies (how much care they needed or avoided) and where they were relative to the poverty line (above or below), along with some behavioral science, outperformed years of marketing persona work, segmentation, and “personalized” campaigns.
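To make the idea concrete, here’s a toy sketch of that kind of targeting. This is not the actual production model; every name, field, and weight below is hypothetical. It just illustrates how a claims-volume signal and a poverty-line flag might combine into a simple propensity score used to rank who to reach first:

```python
# Hypothetical sketch of claims-informed targeting (illustrative only).
# Signals assumed: claims volume from providers/pharmacies, and a
# below-poverty-line flag (a proxy for subsidy eligibility).
from dataclasses import dataclass


@dataclass
class Prospect:
    member_id: str
    claims_last_year: int      # provider + pharmacy claims filed
    below_poverty_line: bool   # above/below the federal poverty line


def propensity_score(p: Prospect) -> float:
    """Toy weighted score: heavy care utilization and low income both
    raise the likelihood that a plan switch is in the member's interest."""
    score = min(p.claims_last_year, 20) / 20.0  # cap and normalize utilization
    if p.below_poverty_line:
        score += 0.5                            # subsidy-eligibility boost
    return round(score, 2)


prospects = [
    Prospect("A1", claims_last_year=2, below_poverty_line=False),
    Prospect("B2", claims_last_year=18, below_poverty_line=True),
]

# Rank prospects so outreach starts with the highest-propensity members.
ranked = sorted(prospects, key=propensity_score, reverse=True)
print([p.member_id for p in ranked])
```

The real system layered behavioral science and omni-channel messaging on top of scoring like this, but the core shift is visible even in the sketch: ranking people by observed behavior (claims) and circumstance (income) rather than by hand-built personas.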
And it kept getting better. As I reflect on it now, we started manipulating people to act on their health (in their best interest) without ever telling them we were doing it.
I ask myself now, “Why didn’t we involve them and show them what we were doing for ‘their benefit’?”
Intro to GenAI
I started using Generative AI tools like DALL·E 2 in the summer of 2022 to see what was possible. Where we are today, in March 2024, is hard to believe. Here are two images generated from the same prompt.
DALL·E 2 – June 17, 2022
DALL·E 3 – March 7, 2024
I was excited when I saw how good the DALL·E 2 images looked. A little over 600 days later, the quality keeps improving. Now, with DALL·E 3, the level of detail, options, and image styles it can produce is staggering. It’s better with text and improving all the time. Midjourney, Stability AI, Leonardo, and others are adding more robust features by the week.
Keeping up is almost out of reach.
Pika excited everyone when it got to a 3-second video from a text prompt. That was released just a few months ago. Then OpenAI previewed Sora’s 60-second videos from text prompts, coming any day. As someone used to moving fast, working agile, and optimizing businesses for a living, I find it strange to say this with such emphasis.
WE MUST SLOW DOWN.
I restarted Perceptint in January and started talking to people actively about their digital journey. I kept finding a gaping hole threading through conversations.
I’d ask, “Are you using AI today?”
Most would say, “Yeah, tools like ChatGPT and Bard.”
“Do you have any guidance from your company on how to use it?”
Too often, the answer came in hushed tones: “I don’t think they know I’m using it yet.”
At a recent panel I spoke on, half of the business owners said they already used AI tools. When I asked for a show of hands for how many had best practices or any guidelines on usage in place, not a single hand went up.
As the hands went down, my gut confirmed what my brain was already thinking. I’ve got to get to work. This led me to fast-track a new priority: researching and writing my Guidelines for Responsible AI & Business, the first policy document for my consulting practice. As I shared drafts, it became clear that others needed it too.
The U.S. Government’s NIST AI Risk Management Framework laid out its recommendations on how businesses should approach AI in January 2023, well before most companies seriously considered it. This exhaustive framework for progress was developed through a partnership between the leading players in AI (including OpenAI) and government standards leaders.
NIST’s framework, a phenomenal work, was released in late January 2023. ChatGPT (running GPT-3.5) had been announced to the public roughly two months earlier and began monetizing just a few weeks later. Even with the current lawsuit(s) in play between several AI titans, it’s clear decisions were made to consciously sprint ahead of any economy-scale safety measures to monetize their code and reach their original AGI goals.
Do we need AGI before we need public safety?
Do you go from sticks and stones to laser swords with nothing in between?
If we’re sneaking in a party while our parents are just a few blocks from home, we’re making unsafe decisions and acting like reckless teens. If that doesn’t fit the profile of one of the most significant insider trading decisions ever, I’d welcome other examples.
The challenge we’re up against now is that the world is embracing these toolsets and ways of thinking that AI will magically solve everything.
OpenAI announced new Board positions this week; none are data scientists or ethicists. They are all experts in commercialization: commercializing ahead of safety and regulation. Make the money now and clean up the mess later. Sorry, it can’t go down like that again.
That’s why we need a collective conversation NOW, and why a document I started researching and writing four weeks ago is catching my network’s attention.
More than a year after NIST’s framework was released, only the largest tech companies and a few others have formal guidelines for thinking about AI, the inherent risks of bad data, or how to evaluate potential risk scenarios and their outcomes.
According to Statista, between January 2021 and December 2023, $164.4 billion was invested globally in AI startups, fueling the thousands of startups proliferating today.
How many of them are deeply, genuinely concerned about your data? Your privacy? Your security? If they follow the first and second wave of digital adoption (the web and then social/mobile), WE WILL ALL be the product.
Their dominion over our lives, our footprint in the world, our loved ones, and the things we treasure will be handed over to them like passengers asked to jump off a pleasure cruise we booked and paid for. We don’t own the boat, so there is no choice now. 🤔
As a technologist and a human, I know this must be addressed. These technologies need guardrails. We cannot proceed with abandon as we did before, following our enthusiasm down the dark hole before we catch the light.
So I’m starting with myself and my close network. We feel called to duty like the rebel alliance gathering to fight tyranny. From here, we’ll use our network effect, marketing skills, and an ongoing push to spread this far and wide. Open source movement, here we come.
And now, without further ado, here’s v1.5 of the working document, Perceptint’s Guidelines for Responsible AI & Business. Here’s what it covers:
- Content Authenticity Statements
- Goals, Purpose, and Applications where AI will be used
- Data Management
- Ethical AI & Responsibility
- AI Governance Structures
- Privacy & Security
- Stakeholder Engagement – Who and How
- Monitoring AI Performance & Evaluation
- AI Types being tested
- Specific AI Applications
- Additional Considerations like:
  - Data Bias & Inclusivity
  - Sustainability
  - Transparency & Explainability
  - Legal & Regulatory Compliance
  - Impact Assessments
  - Feedback & Continuous Improvement
If you or your organization is interested in adopting this guideline framework, please contact me. In the next few weeks, I’ll have a page listing the organizations using it. I’m currently working with a small group to create a templated version that will be available online.