AI and Being Brave Enough to “Fail Fast” and “Fail Forward” (4-2-25)

This week is about AI and being brave enough to “fail fast” and “fail forward”.  I dedicate this to Jax Penso and our tech team.

A few months ago, Jax approached me with an idea.  She had a complicated dataset filled with quant and qual data.  She wanted to explore if AI could help dissect, analyze, and make connections faster and more effectively than humans could.  I loved the vision and the willingness to experiment.

I told Jax that what she was asking for would be incredibly difficult, and that no one (in our company or beyond) had quite figured out how to do this with AI yet.  Jax wasn’t fazed.  She was willing to explore, so we got a team together and started working.  The team acknowledged the difficulty of what we were setting out to do.  We knew there was a good chance that things would not work out perfectly.  We also knew that by pushing the boundaries of what was possible, we would learn a lot.  In the end, our prototype didn’t meet our lofty expectations.  HOWEVER, we gathered a lot of learnings along the way.

Let’s make connections.  How often do you hear people talk about being brave enough to “fail fast” or “fail forward”?  I’m sure you hear it a lot.  Those are some of the all-time most popular corporate buzzwords.  Now, how often do you see people who are truly brave enough to fail fast or fail forward?  That’s a rarity.  It would have been easy for Jax and the team to decide not to pursue the project once they understood that the chances of perfect success were slim to none.  It would have been easier, BUT it would not have been as fruitful.

Imagine how different work would be if more folks acted like Jax and the team.  At the end of the day, the team didn’t fail.  They learned.  The team didn’t fail; they created a new foundation.  Pursuing the impossible led us to explore new technologies and approaches.  It also challenged us to rethink and better embrace the Agile mentality.  We didn’t make it to the summit, but other folks will be able to leverage our learnings to get a head start.

The challenge: Are you brave enough to explore, even when you’re not 100% sure how it will turn out?  Are you one of the rare people brave enough to “fail fast” or “fail forward”?

Have a jolly good day,

Andrew Embry

iPhones, AI, and Embracing a Product Mindset (3-26-25)

Last week was about AI, tools, and expectations.  I want to dive deeper into this area by exploring iPhones, AI, and adopting a product mindset.

It’s hard to believe that the first iPhone came out in 2007.  I think I still had a Motorola Razr phone then. 😉  The first iPhone launched with a 3.5-inch LCD screen, a basic camera, and a maximum of 8GB of storage.  Today’s iPhone 16 Pro has a 6.3-inch Super Retina XDR display, a triple-camera system, and a maximum of 1TB of storage.  That is a MASSIVE tech evolution.

This didn’t happen by accident.  It happened because of a choice Apple made.  Apple could do one of two things: launch the iPhone fully knowing it was good but not perfect, or hold the iPhone back until it was perfect.  If they had tried to make the iPhone perfect, it never would have launched.  Instead, they were brave enough to embrace a product mindset.  They were willing to launch version 1, fully knowing they would have to evolve it over time.

What does this have to do with AI or anything else?  I’ve been leading and working on various AI initiatives over the past couple of years.  There is ALWAYS pressure to make something perfect before rolling it out.  There is ALWAYS the fear that the AI solution won’t instantly meet all of the audience’s needs.  This pressure and fear often lead to never delivering anything tangible, because we are scared of not being perfect.  Have you ever felt this about any of your work?

Similar to Apple, I’ve had to work to embrace the product mindset.  This has been a shift for me.  I’ve had to learn to accept that something delivered with room to grow is SUPERIOR to something that never gets off the shelf because it’s waiting to be perfect.  I’ve had to learn to embrace all the feedback and questions that come with a version 1 as fuel for growth rather than criticism of my shortcomings.

The challenge: How can you embrace more of a product mindset?  Will you be strong enough to embrace that something delivered with room to grow is better than something that never gets off the shelf?

Have a jolly good day,

Andrew Embry

AI, Tools, and Expectations

Last week was about my kids using ChatGPT and not limiting our thinking.  This week is about AI, tools, and expectations.

Let’s say you need to assemble something, so you grab a wrench out of your toolbox.  You use the wrench to fasten the nuts and bolts.  Then you realize there are screws you need to insert.  Your wrench won’t be able to insert the screws.  Does this make the wrench bad?  Would you throw the wrench away because it wasn’t good at solving this challenge?  I’m guessing you wouldn’t.  I hope you’d recognize the value and the limitations of the wrench, and of every other tool in your arsenal.

Let’s connect some dots.  We should apply this same thinking to AI.  I’ve been in conversations exploring different AI tools and heard people say, “It can’t do X, so I don’t know if it’s any good.”  Have you ever heard someone say something like that?  This would be like saying, “This wrench doesn’t work for every single situation, so that means wrenches are bad.”  It’s true that the tool couldn’t do X.  However, the tool could do A, B, and C and get you 70% of the way there in minutes versus the weeks it would take to do the same work manually.  That is powerful.  That is valuable.

Whether it’s wrenches and screwdrivers from a toolbox or types of AI applications, it’s important to have the right expectation for each tool.  We don’t expect a wrench to be perfect and solve all problems.  Instead, we understand we need a variety of tools to be successful.  In a similar way, we shouldn’t treat AI as if it is just one tool.  AI spans a variety of tools and use cases, each with its own benefits and limitations.

The challenge: How will you properly set expectations for various AI tools?

Have a jolly good day,

Andrew Embry

My Kids, ChatGPT, and Not Being Limited in our Thinking (3-12-25)

This week we are going to kick off a new series focused on things I’ve learned about AI over nearly two years of leading various AI initiatives.  This one just so happens to be about my kids, ChatGPT, and not being limited in our thinking.

Shortly after ChatGPT was launched, I introduced my kids to the technology.  While they may not have understood what a large language model was or how it worked, I helped them understand the role it could play.  Essentially, I told them it was like an assistant for them to use to explore ideas. 

A couple of weeks later, we were sitting at the dinner table and I asked everyone what they had done that day.  My kids explained to me how they created a new game with ChatGPT.  I was shocked by this and asked them to tell me more.  They explained how they told ChatGPT that they wanted to play a game inspired by their favorite cartoon, The Owl House, one that included epic battles against evil villains and took about an hour to play.  With this prompt, ChatGPT created the rules, plot, and setting for their game.  I asked how they came up with the idea, because I never would have thought of it in a million years.  Their response was basically, “You said it could help brainstorm, so why wouldn’t we try that?”  By the way, that’s some pretty good prompting.  #prouddad

What does this have to do with anything?  At the time, I never would have thought of using ChatGPT to create a game.  I had been stuck in my normal day-to-day frame and unable to see beyond it.  I hadn’t realized it at the time, but I had limited AI to only certain use cases.  When my kids shared their experience, it was a nudge to make sure that I’m not the one limiting the potential of emerging technology.  Now, instead of asking, “Where does AI fit?” I ask, “How can we use AI to enhance what’s possible?”  The first question assumes there are limited places where AI can be helpful.  The second question assumes there is always a chance to leverage AI to enhance things.  This second question causes me to lean in with curiosity and a willingness to explore potential.

The challenge: How will you ensure your thinking is free and unlimited?

Have a jolly good day,

Andrew Embry