On the heels of my Marketing Plan for Tech Startups launch during TECH WEEK by a16z, I had the joy of hand-delivering copies to some of the people who shaped my marketing journey: teachers, mentors, and icons whose ideas live inside these pages.
At Stanford University Graduate School of Business, I sat down with Professor Baba Shiv, whose groundbreaking research on the neuroscience of decision-making forever changed how I think about marketing. He taught me that 95% of our decisions are driven by emotion, not logic — a truth that still holds in B2B, even when the stakes involve multi-year contracts and enterprise deals.
Stanford GSB: With Prof. Baba Shiv this week (2025) and the cohort of the Innovative Technology Leader program (2023)
At Google, I met with Jeanine Banks, whose leadership at Google Developer X taught me what it truly means to innovate inside a large organization. Working for Jeanine was a career highlight, and her ability to bootstrap new initiatives and help teams “execute and win like a startup” inspired many of the ideas I share in the book.
Google: With Jeanine Banks this week (2025) and during my Noogler orientation (2018)
And at Y Combinator, I caught up with Pete Koomen, Partner at YC and Co-founder of Optimizely, one of Silicon Valley’s great success stories. His Startup School talk on enterprise sales remains one of my favorites, and key lessons from that lecture made their way into this book.
With Pete Koomen at Y Combinator
Pete’s quote for the Marketing Plan for Tech Startups
Every stop on this tour felt like a full-circle moment: celebrating the people and ideas that helped build the foundations this book stands on.
Where shall I make the next stop on the book tour?
I’m still on cloud nine after three incredible book launch events during TECH WEEK by a16z. Seeing Marketing Plan for Tech Startups in the hands of its first readers felt surreal.
Across the week, I met founders, marketers, and innovators who share a belief that marketing must be part of product creation from day one.
Each conversation reminded me why I wrote this book: to help builders turn innovation into adoption, and adoption into revenue.
Book by the Numbers
The Marketing Plan for Tech Startups is your trusted companion for moving from product idea to adoption and revenue with confidence.
Startup founders will gain clarity on priorities and next steps when it matters most.
Marketing executives will get a timely gut check to align teams and accelerate growth.
#1 New Releases in Direct Marketing on Amazon
✅ #1 New Releases in Direct Marketing
✅ 3 timeless marketing axioms
✅ 3 book launch events at SF TechWeek
✅ 18 elements of a complete marketing plan
✅ 30+ expert contributors from global tech leaders
✅ 60+ brands and products featured
✅ 300+ startup founders and marketing leaders RSVP’d for the launch
3rd Event 🚀🚀🚀: The Official Book Launch
📅 Friday, Oct 10 | Official Book Launch at Contentful’s SoMA office
300+ Startup Founders and Marketing Leaders RSVP’ed to the Official Book Launch Party
As a marketer, I believe in drinking my own champagne: staying hands-on with the product, engaging directly with customers, and implementing what I advocate for in the book. That’s why I chose to launch Marketing Plan for Tech Startups during the Tech Week conference, attended by my ideal readers: founders and marketing leaders passionate about innovation.
In the book’s chapter on positioning, I quote Brian Chesky:
“Build something 100 people love, not something 1 million people kind of like.”
That principle guided my launch: starting small and creating genuine connections with early adopters before scaling. And what better place to engage directly with readers than the largest distributed conference in Tech?
Still, I didn’t expect such an overwhelming response. Over 300 Tech Week attendees signed up for the launch. 🫢
Even after years of designing and hosting events for major tech companies, this one felt different.
1000+ Women Leaders RSVP’ed to “Women in AI” Event at Chief
AI is lowering the barriers to innovation, and women are leading the charge. At Chief, I had the honor of joining Joyce Chen, Monisha Somji, Elaine Wah, and Susan Chu to discuss how women are shaping the next wave of AI.
As someone who studied computer science and began her career as a software engineer, I was often the only woman in the room. Seeing AI open the door for more voices — especially women’s voices — feels deeply personal. The event at Chief made that shift visible: a packed room and 900+ people on the waitlist.
The AI era does not shrink marketing’s role. It expands it.
For too long, marketing has been narrowed to campaigns and promotion. But with AI, we now have the tools to reclaim the full scope across product, price, place, and promotion.
There’s never been a better time to be a marketer than in the era of AI.
1st Event 🚀: Funded Female Founders
📅 Monday, Oct 6 | Startup Grind: Female Founders backed by YC
The Book Debut: Funded Female Founders Event with Startup Grind
On the first day of SF TechWeek, at a female-founders event, a VC said she’s no longer investing based on technical depth alone. She’s investing in founders with a clear go-to-market plan.
That insight resonated deeply, and it’s exactly why I launched Marketing Plan for Tech Startups during Tech Week. When the barriers to building lower, your go-to-market plan is what turns great tech into a great business.
With Gratitude
Thank you to everyone who showed up, shared ideas, and helped this book lift off.
Inspired by Priyanka Vergadia’s demo showing how she built a full-stack app in minutes with GitHub Spark, I gave it a try. Spark is GitHub’s new AI-powered app builder that runs entirely in your browser. No setup. No config. No need to remember Java classpath from my mobile and web app developer days. 😉
Just describe what you want, and Spark builds it end-to-end: front-end, database, authentication, etc. As always, Priyanka did an awesome job walking her YouTube channel viewers through all the steps of using GitHub Spark to go from zero-to-app, so I thought: why not?
The PRD aka my wish list for a book reader
I mostly wanted three things:
A two-page view, so reading on a big screen feels like having an actual book open in front of you.
A search function so you can instantly jump to “positioning,” “pricing,” or “Anthropic case study.”
Bookmarks and notes, so readers can mark sections and write down thoughts as they read (my paperback margins are always full of notes and post-its 😉)
Three features I dreamed up. Let’s see what I got.
GitHub Spark
How GitHub Spark turned my PRD into a working e-reader
I typed my requirements in natural language, hit submit, and Spark went into “think mode.”
A few minutes later, I had a working prototype with the two-page view.
A couple of hours later (and with a few vibe-coding-hacks I’ll detail below) I added keyword search and a bookmark system. Here’s the finished product:
My e-reader vibe-coded in an afternoon
My e-reader vibe-coded in an afternoon looks very promising but is not quite ready to ship just yet. Here’s why:
Lessons learned from vibe-coding an e-reader in GitHub Spark
First, while Spark gave me the basic app scaffolding quickly, it struggled to render a PDF heavy with graphics. Sometimes it showed only text, other times it spit out binary data.
Spark’s default PDF handling just wasn’t built for a manuscript like mine. My book isn’t a typical wall of text. I wrote it in Google Slides to make it as much a tool as a book, packed with frameworks, diagrams, and visuals that startup founders and marketers can apply right away. The format was deliberate: keep the text lean, rely on visuals, and use slides as a constraint so every word carries weight.
I knew from a previous vibe-coding session that v0 by Vercel could handle a heavyweight manuscript like mine, so I thought: why not ask Vercel how it did it? The answer was pdfjs-dist, the distributable version of Mozilla’s PDF.js, which renders PDFs natively in the browser without plugins. I plugged it into Spark and—yay—I was unblocked!
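For anyone curious what that unblock looks like in practice, here is a minimal sketch of rendering a PDF page with pdfjs-dist in the browser. It is not my exact Spark code; the file path, canvas ID, and bundler setup are placeholder assumptions:

```javascript
// Minimal sketch: render page 1 of a PDF to a <canvas> with pdfjs-dist.
// Assumes a bundler (e.g., Vite or webpack) and a <canvas id="page-canvas"> in the DOM;
// the worker filename below may differ slightly between pdfjs-dist versions.
import * as pdfjsLib from 'pdfjs-dist';

pdfjsLib.GlobalWorkerOptions.workerSrc = new URL(
  'pdfjs-dist/build/pdf.worker.min.mjs',
  import.meta.url
).toString();

async function renderFirstPage(pdfUrl) {
  const pdf = await pdfjsLib.getDocument(pdfUrl).promise; // download and parse the PDF
  const page = await pdf.getPage(1);                      // pages are 1-indexed
  const viewport = page.getViewport({ scale: 1.5 });      // zoom factor for crisp text

  const canvas = document.getElementById('page-canvas');
  canvas.width = viewport.width;
  canvas.height = viewport.height;

  // Rasterize the page (text, vector graphics, embedded images) onto the canvas.
  await page.render({ canvasContext: canvas.getContext('2d'), viewport }).promise;
}

renderFirstPage('/book.pdf'); // placeholder path to the manuscript PDF
```

The same render() call handles text, vector graphics, and embedded images, which is exactly what a visual-heavy manuscript needs.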
Second, as I layered on more prompts and features, I learned that Spark projects can hit limits and stop accepting prompts.
The first prototype was quick and easy; refining it took patience… and some help from ChatGPT. When Spark stopped accepting prompts, I pulled down the GitHub ZIP, then used ChatGPT to reverse-engineer Spark’s app architecture and rebuild the project with more detailed instructions.
This experience pretty much sums up today’s vibe-coding scene: vibe-hackers are out-of-the-box thinkers who juggle multiple tools; when one doesn’t do what you need, you pick up another.
My final lesson: vibe-coding is a lightning-fast way to prototype and experiment, but it still takes time to create a production-grade app ready to be shared with others. That’s why, for now, I’m only sharing screenshots.
Just like with my “Slide Tools” hackathon two weeks ago, I was reminded of the real promise of AI-driven coding:
The future of software with AI: everyone can be a creator.
The next generation of apps — whether e-readers or enterprise apps — will be powered by AI, built faster than ever, and customizable to fit customer needs with precision.
And some of those apps will be built by marketers.
Marketers as vibe-coders
“Vibe Marketers” are already starting to appear on job boards:
“We’re looking for a Vibe Growth Marketing Manager who is a builder who prototypes and ships faster than most teams can spec a brief. You’ll use AI tools, LLMs, no-code/low-code platforms, and smart automation to rapidly unlock new growth channels, improve operational efficiency, and experiment with new marketing ideas end-to-end.”
It’s clear that vibe-coding is becoming essential for speed and efficiency in marketing workflows.
But why stop at workflows? What if marketers could also be the first prototypers of new product ideas?
Marketers as product prototypers
Marketers are already customer advocates and trend spotters. Vibe-coding tools now give them the ability to turn insights directly into working prototypes, bridging the gap between customer voice and product innovation.
With vibe-coding, marketers can also extend existing products with new features requested by their customers, as I demonstrated in my “Slide Tools” hackathon.
My custom slide tools I added to Google Slides
A sneak peek into my book’s vision
Elevating marketers into co-creators of product is central to my book’s vision. My goal is to restore marketing to Kotler’s full “4 Ps” (product, price, place, promotion), rather than the narrow “1 P” of promotion it’s often reduced to. Vibe-coding tools may be the superpower that helps marketers reclaim all four.
If you’re a startup founder or marketing leader, my upcoming book Marketing Plan for Tech Startups distills lessons from Fortune 500 companies and startups into practical frameworks to break through the noise and turn innovation into revenue.
I’m also thrilled to share that the one and only Priyanka Vergadia is among its distinguished contributors! 😀
This weekend, I pulled off my own hackathon. The challenge? Cleaning up 200+ Google Slides of my upcoming book: Marketing Plan for Tech Startups.
Why so many edits?
After a year of experiments and contributions from several collaborators, each with their own style, the deck had turned into a Frankenstein: fonts all over the place, inconsistent sizes, text boxes scattered. Original thinkers are not known to stick to templates. 🤪
Why did I write a book in Google Slides?
Because I wanted to create a tool as much as a book, a resource startup founders and marketers can apply right away. My rationale: keep text lean, rely on visuals, and use slides as a constraint so every word carries weight.
With the book launch at TECH WEEK by a16z in San Francisco approaching this October, the thought of unifying it all was daunting. Manually cleaning 200+ slides would take days, and still never be perfectly consistent.
So I turned to AI. It thrives on repetitive and grueling work, the kind humans struggle to do well. I just needed to get it inside Google Slides.
How to vibe-code away the pain of manual slide edits
First, I opened Apps Script under “Extensions” in Google Slides:
Accessing Google Slides API via Apps Script
Second, I used Windsurf to vibe-code the features I wanted:
From a single prompt…
… I got ready-to-use code and a deployment guide in seconds.
Third, I pasted the code into Apps Script…
Apps Script in Google Slides
… and just like that, I got the first tool. Quick test… It works, yay!
I continued with more prompts to build functions like updating colors to a specific shade of black or changing fonts to Lato.
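To give a flavor of what those functions look like, here is a minimal Apps Script sketch in the same spirit. It is not the exact code Windsurf generated for me; the font name comes from above, and the hex value is a stand-in for “a specific shade of black”:

```javascript
// Minimal Apps Script sketch (not my exact generated code): walk every slide
// and normalize all shape text to one font and one shade of black.
function normalizeFontsAndColors() {
  const FONT = 'Lato';
  const BLACK = '#1A1A1A'; // placeholder for "a specific shade of black"

  SlidesApp.getActivePresentation().getSlides().forEach(function (slide) {
    slide.getPageElements().forEach(function (element) {
      // Only shapes (text boxes, titles, placeholders) are handled here;
      // tables and groups would need their own loops.
      if (element.getPageElementType() === SlidesApp.PageElementType.SHAPE) {
        const text = element.asShape().getText();
        if (text.asString().trim().length > 0) {
          text.getTextStyle().setFontFamily(FONT).setForegroundColor(BLACK);
        }
      }
    });
  });
}
```

Paste something like this into Extensions > Apps Script, run it once to grant permissions, and every text box in the deck snaps to the same font and color.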
Soon enough, I had my own full set of “Slide Tools” to tame 200+ slides. ⬇️
My custom set of “Slide Tools”
Maybe in addition to publishing a book, I should start a side hustle selling Google Slides automations. After all, I already have one very polished deck to prove it works. 😉
One more thing
Like every good hackathon, this one came with a “one more thing.” It reminded me of the real power of vibe coding: when products open APIs, anyone can go beyond the defaults, shape tools their own way, and turn a generic product into something personal.
The future of software with AI: everyone can be a co-creator.
And with vibe coding democratizing access to computer programming, that future is close and attainable.
Everyone can make a popular tool even more useful.
As a marketer, I’m excited about the future of software. I’ve spent my career helping emerging technologies find their market and convert innovation into sales. That same spirit is what I poured into my upcoming book. Marketing Plan for Tech Startups is meant to be a practical guide that helps founders and innovators do the same.
And just like a product with open APIs, this book is built to be extended. If you’d like to add your perspective or contribute to future editions, I’d love to hear from you.
Please comment below or send me a DM, and I’ll be in touch!
In mathematics, axioms are self-evident principles. They’re the foundation on which more complex theorems are built. Take calculus, my favorite branch of math. It starts with a handful of assumptions: space and time are continuous, limits exist, and change can be quantified. If you know how fast something is changing, you can figure out how much it’s changed — and vice versa. From there, you unlock entire worlds: rates of growth, accumulation of value, and optimization. Sounds a lot like marketing, doesn’t it? 😉
In marketing, axioms serve the same role. They help you recalibrate when markets shift, competitors surprise you, or new technologies (like gen AI) change the game.
The Three Axioms of Marketing
Over the years, I’ve returned to these three core axioms again and again, applying them through every tech wave from mobile to AI while working across startups and Fortune 100 companies in both Europe and Silicon Valley.
Axiom 1 – Scientific precision
For every product there exists exactly one clear positioning statement that links its unique capability to a concrete customer need.
The most effective marketers approach their product’s positioning statement as a discovery process, not guesswork. Positioning requires rigorous market analysis, a deep understanding of buyer personas, and an honest comparison to alternatives. When done correctly, positioning defines the North Star for the business and keeps Marketing, Sales and Product in sync and united.
To see scientific precision in action, let’s look at a fresh positioning example from the autonomous ride-hailing market. Waymo’s trajectory shows how clear market definition and targeted messaging can carve out a profitable niche even in a highly competitive space.
Example of a positioning statement inspired by Waymo (San Francisco, August 2025)
In 2025, Business Insider and Reddit discussions revealed that Waymo rides cost about $5–6 more than Uber or Lyft, yet many riders valued the extra comfort, reliability, and privacy enough to pay the premium.
Without precision here, marketing risks scattering its efforts, diluting the message, and missing the target entirely.
Axiom 2 – Emotional storytelling
In every purchasing decision, emotion precedes reason. Marketing must first evoke feeling so that logic can later validate the choice.
Baba Shiv, a revered Marketing Professor at Stanford University Graduate School of Business, once stated, “Nearly 95% of our decisions in life are rooted in emotion—not logic.” You might wonder if this applies to business-to-business transactions, like enterprise software sales. The answer is a definite “Yes!”
In the mid-2000s, Cisco ran its “Self-Defending Network” campaign to position its IT security solutions as proactive, intelligent defenders of an organization’s assets. Rather than focusing solely on firewalls, intrusion prevention, or encryption standards, Cisco told stories of real people whose work and reputations were safeguarded because the network stopped threats before they could cause harm.
Cisco’s The Power of The Network Campaign
If you’re an enterprise sales executive, you probably realize that customers often decide in favor of or against your product early in the sales cycle. What follows is essentially an opportunity for the customer to validate their initial decision. This phase involves testing your product, examining ROI calculators, or seeking peer reviews to gain confidence in the choice they’ve already made.
In summary, customers make decisions emotionally and then rationalize them.
Axiom 3 – Relentless consistency
Repeated delivery of a coherent message compounds its impact over time, while inconsistency diminishes effectiveness toward zero.
Consistency turns your positioning into a memory imprint. Each content interaction, ad view, and social media conversation must reinforce the same promise.
This is about disciplined alignment: sales decks matching web copy, product announcements echoing the same value proposition, and customer success stories reinforcing the same themes.
Over months and years, consistency builds trust and recognition in ways no single campaign can match. Break the thread too often, and you start again from zero.
Vanta’s years-long commitment to podcast advertising shows how repetition in the right channels compounds over time. Starting in niche shows like This Week in Startups and CISO-focused security podcasts, and expanding to mainstream business podcasts such as Acquired, The Daily, The AI Daily Brief by Nathaniel Whittemore (my personal favorite), and The Diary of a CEO, they’ve kept the core message intact. The result: thousands of sales conversations where prospects bring up hearing about Vanta “all the time.” That’s consistency turning positioning into muscle memory.
In the words of their CMO, Scott Holden: “Vanta gets a lot of attention for our billboards, but podcasts are a massive lever for us too. We’ve been advertising in them since 2018. (…) All those demo requests that sales loves begin higher up the funnel.”
Vanta advertised for years on podcasts aimed at their target audience: CISOs who spend hundreds of thousands of dollars on security software
From Calculus to Product Launches
These axioms have guided me across decades of tech shifts:
2000s – Mobile Internet. At Nokia, I built software tapping directly into telecom APIs, turning raw network data into useful services I then helped take to market for clients like Vodafone, Telenor and TIM.
2010s – Cloud Computing. At Cisco and Riverbed Technology I distilled the value of cloud computing before it became the backbone of digital transformations at Fortune 500 enterprises.
2020s – Data Analytics and Gen AI. At Google, I crafted portfolio-wide narratives for data and AI products and enabled 13,000 sellers to tell those stories effectively.
2025 – Edge and Agentic AI. At Synadia, a cloud-native startup, I repositioned middleware into a full-stack edge AI platform: a key enabler for agentic AI at the edge in retail, automotive, and manufacturing.
The Marketing Axioms in the AI Era
Marketing a category-defining product (gen AI, self-driving cars, or the next leap in biotech) is like solving a math problem no one has worked out before.
There’s no answer key. But the path forward still starts with fundamentals.
In AI-powered marketing today, that might mean taking a single high-performing message and using AI to instantly create dozens of localized, role-specific, or vertical-tailored variations. The goal is to preserve the original emotional heartbeat and positioning clarity.
Final thought: In both math and marketing, axioms don’t give you the full solution. They give you the foundation that makes finding it possible.
Turning Innovation Into Revenue
If you enjoyed this post and would like to continue the journey into the three axioms of marketing, I have exciting news: 10 years after publishing my Marketing Plan for Tech Startups template (which reached more than 100k readers), I’m turning it into a full book:
With the latest AI darling, DeepSeek AI, wiping billions off the market value of US tech giants just yesterday, 2025 is already shaping up to be a fascinating year for AI. The rapid evolution of AI, its promises, pitfalls, and shifting priorities, sets the stage for a year full of disruption. Here are my predictions for what’s IN and what’s OUT in AI for 2025:
AI Tech Stack: OUT with Training Obsession, IN with Inference*
The obsession with training massive models is OUT. What’s IN? Ruthlessly efficient inference. In 2025, if you’re not optimizing for inference, you’re already behind. Here’s why.
The cost of achieving OpenAI o1-level intelligence fell 27x in just the last three months, as my Google Cloud colleague Antonio Gulli observed: an impressive price-performance improvement.
The recent DeepSeek AI breakthrough proves this point perfectly. Their R1 model, trained for just $5.6 million (a fraction of OpenAI’s rumored $500 million budget for its o1 model), achieves feature parity with, and even outperforms, major competitors on key benchmarks:
We clearly figured out how to make LLM training more effective and cost efficient. Time to reap the benefits and use the models for inference.
*We will still be enhancing LLMs’ capabilities, developing smaller, purpose-built models and re-training them with new data-sets.
AI Architecture: OUT with Cloud-First, IN with Edge-First**
Pioneers in the most AI-advanced industries, such as manufacturing, have exposed the limitations of cloud-first AI approaches. According to Gartner, 27% of manufacturing enterprises have already deployed edge computing, and 64% plan to have it deployed by the end of 2027. Why the rush to edge-first AI architectures?
In industrial applications, especially those requiring real-time control and automation, latency requirements as low as 1-10 milliseconds demand a fundamental rethinking of distributed AI system design. At these speeds, edge-to-cloud roundtrips are impractical; systems must operate as edge-native, with processing and decision-making happening locally at the edge.
One of Synadia’s most innovative customers, Intelecy, a no-code AI platform that helps industrial companies optimize factory and plant processes with real-time machine learning insights, perfectly illustrates this paradigm shift. Their initial cloud-first approach had processing delays of 15-30 minutes. By redesigning their AI architecture for the edge, they achieved less than one-second round-trip latencies. This dramatic improvement enabled real-world applications like automated temperature control in dairy production, where ML models can provide real-time insights for process optimization.
Processing data where it is generated isn’t just more efficient—it’s becoming a competitive necessity for every industry. Gartner predicts that by 2029, 50% of enterprises will use edge computing, up from just 20% in 2024.
**The cloud’s role in AI isn’t disappearing (of course), but the default is shifting rapidly towards edge-first thinking.
AI Impact: OUT with What-If, IN with What-Now***
Focusing on model capabilities is OUT. What’s IN? Solving real business problems. The most compelling AI stories in 2025 won’t mention model architecture. Instead, they’ll focus on measurable business impact.
Intelecy’s Chief Security Officer 🔐 Jonathan Camp explains how AI can help ensure quality in manufacturing: “A dairy can use a machine learning forecast model to set temperature control systems using the real-time predicted state of the cheese production process. The process engineering team can use Intelecy insights to identify trends and then automate temperature adjustments on a vat of yogurt to ensure quality and output are not compromised.”
The shift is clear: success is no longer measured in model capabilities, but in hard metrics like revenue gained, costs saved, and efficiency improved. The question isn’t “What can AI do?” but “What value did it deliver this quarter?”
***As an innovation-obsessed marketer, I’ll never give up on “what-if” dreams, but “what-now” is the state of AI in 2025.
The Elephant in the Room: Can gen AI be trusted?
We’ve solved training costs. We’ve started to crack real-time processing. Now, the focus shifts to trust: Can AI deliver consistent, reliable, and verifiable results at scale?
For example, try asking three different gen AI bots this prompt three times each, and see for yourself:
Name top 3 ski resorts in Europe by the total length of ski runs that are truly interconnected (no bus transfers)
We’re entering the era of agentic AI, where AI-made decisions will be automatically implemented by chains of AI functions. Are we ready?
As a technology marketer, I’ve witnessed the Gen AI revolution from its inception. In early 2023, I crafted an enterprise narrative for Google Cloud, helping our global salesforce inspire customers to adopt this technology. I’ve seen AI evolve significantly: from chatbots writing silly poems to answering medical questions and guiding students in physics. Today, AI can understand, reason, and create across various inputs like text, images, audio, and video. I’m excited about AI’s potential to accelerate marketing innovation.
Now, as the VP of Marketing at Synadia, the startup on a mission to connect the world, I’ve observed even more. At a recent webinar for 50+ portfolio companies at Forgepoint Capital, I shared these insights, which I’m highlighting in this week’s newsletter. Thank you Tanya Loh for the opportunity!
18 months ago, I visualized gen AI as my alter ego, a game changer amplifying our strengths and overcoming our weaknesses. I hoped that for creative and curious people who tend to procrastinate and get bored easily, Gen AI would make life easier by handling the repetitive and mundane tasks we dislike.
Gen AI amplifies many of our strengths and overcomes our weaknesses
Today, I can say that my prediction turned out to be true. I see three main buckets where Gen AI helps us in marketing:
✅ Expand your expertise
✅ Jump-start your creativity
✅ Offload your marketing tasks
I’ll illustrate how I’ve used Gen AI in marketing through stories with Google and Synadia.
Gen AI quickly expands our expertise and teaches us new skills
First, let me tell you how Gen AI helped me hit the ground running from Day 1 at Synadia and how it continues to be my invaluable sidekick. I created a custom GPT to teach me about Synadia’s product portfolio. I trained it using public documentation. I knew the learning curve on the product built by the world’s brightest distributed systems engineers would be steep. Asking my personal GPT questions promptly got me up to speed.
Here’s another example. Imagine hearing a new acronym during a meeting with your engineers. Instead of interrupting the flow and making everyone wait while someone explains it, my Synadia bot, trained on github.com/synadia-io, immediately clarifies it for me. This way, we can stay focused on discussing our vision for building the ultimate platform for distributed applications without sidetracking the meeting.
My other story happened at Google. You may have seen ‘The Tale of a Model Gardener’ video, a humorous cartoon about Gen AI’s ability to help enterprises achieve their business goals. Gen AI accelerated the creative process for this video we launched ahead of #GoogleCloudNEXT in August 2023. Here’s how.
The idea hit me while cycling from San Francisco’s Marina District to Google’s office in the Financial District. I wondered: “How do I explain concepts in AI, such as augmentation and prompt engineering, in fun, approachable ways?” The idea of a cartoon video surfaced, but how to start, having never written a video script or a cartoon?
With just 30 minutes until my next meeting, I instructed Bard (now #GoogleGemini): “Hey, Gemini, write me a movie script about X.” I added three sentences with my idea. In minutes, I got a fully developed movie script, beautifully formatted by scenes. I perfected things with small additional prompts. Google #Vertex AI, an end-to-end ML platform, helped me generate images. I stitched together a script draft with some images and sent it to my creative colleagues, all within 30 minutes, and they really liked the idea of a cartoon. Though they had different ideas about what the cartoon ought to explain and how it ought to look, the concept from my Gemini experiment landed.
This triumph really shows what Gen AI brings us as marketers. I got a solid movie script within minutes, with no prior experience in building such things. I didn’t waste the idea, which became an important, valuable deliverable for the company.
Have you seen this fairy tale 🐇🥕🥧 about gen AI?
Gen AI sparks our creativity
I want to expand on how I use Gen AI to jumpstart my creativity, especially when short on time. As you can imagine, only a few weeks into the Synadia role, I’m working on our positioning and messaging. I brainstorm with my small but mighty marketing team, my product and engineering team, and my founder and CEO. I also brainstorm with Gen AI, especially when everyone’s busy. (My bot is always available.)
As the proud granddaughter of a professor of physics and a prolific dressmaker who whipped up gorgeous fashion from his patterns, I’ve long loved prototyping and testing my ideas. My mind works best when reacting to prototypes versus just thinking about them. Prior to Gen AI, we had to write or sketch out our prototypes. Gen AI needs just a simple prompt to produce a full document, which can spark more ideas and creativity in ourselves and others. (We saw this with my design team at Google and ‘The Tale of a Model Gardener’ AI-generated cartoon script.) For that same reason, product demos are worth 1,000 slides. Show, don’t tell.
Gen AI offloads small tasks
Gen AI also can offload small, repetitive, mundane tasks to free us up for more strategic thinking and exciting tasks. At Synadia and Google, Gen AI has helped me:
Jump-start projects. A custom prompt to generate case studies from our many great customer stories captured in blog posts and videos scaled our small team’s output.
Generate images. The early images for my first cartoon script for ‘The Tale of a Model Gardener’, weren’t perfect, but brought the narrative to life.
Edit content and polish minor details. While preparing my creative idea for the design team, I lacked the time to fix parallel construction in my lists or catch typos. My bot took care of that so I could focus on creativity. Writing uses the creative part of our brain; editing uses the analytical. Mixing the two puts the brakes on the creative process, so I like to offload the latter to my bot.
Gen AI helps us feel more experimental
Gen AI never lets an idea go to waste. When you’re pressed for time, quickly producing a first prototype helps your colleagues provide feedback faster. While you might discard that initial version, it speeds up the journey to the final product.
If a picture is worth 1,000 words, a prototype is worth 1,000 thoughts
Sometimes, a colleague may take your prototype in a completely new direction. I find this process empowering and encouraging. Each strong reaction, even a negative one, means I’m one step closer to the ideal solution.
Innovation thrives on collaboration and diversity, not egos. Gen AI helps create an environment where ideas evolve and improve through teamwork, making our solutions stronger and more innovative.
The drawbacks and obstacles with Gen AI
Gen AI has some drawbacks. A few I’ve encountered include:
Ubiquitous language. The bar for quality content has never been higher, and savvy readers detect and tune out AI-generated content that is repetitive, generic, and vague. We’ve seen content overload for close to two years now. Cutting through that noise requires high-quality content.
Flawed responses. Use AI responsibly and not verbatim. AI bots are not deterministic. Our bot’s responses may be quick, but sometimes contain significant errors in reasoning. I wrote about this problem in my post on how I turned a daunting 150+ page-long voter pamphlet into a handy cheat sheet for the San Francisco elections. I prompted: “Summarize all the propositions on the March 5th 2024 SF election ballot with their top arguments for and against in a 3-column table using the voter pamphlet as the data source.” The bot’s quick response impressed me. But I found reasoning errors and one argument was entirely made up by the bot. Always check your results.
No slide fixes! Gen AI will not do what we dream of (yet): unify our fonts and texts in our slide decks 😂.
The pace of change in Gen AI is unprecedented. I’ve seen nothing like this growth in my 15+ years in tech. The new features from OpenAI, Google, or Anthropic are just the tip of the AI innovation iceberg. Many startups are working to perfect Gen AI as well. In the meantime, discovering gen AI feels amazing, and I wonder how we ever lived without it.
Looking ahead
Despite all that Gen AI brings us as marketers, it cannot compete with human storytellers. Gen AI is no substitute for well-written, well-narrated customer stories. Even OpenAI looks for interesting use cases of their products, where people explore new features in unexpected ways. So let’s provide them. A recent LinkedIn post about me using ChatGPT and the Peloton app to rediscover German was reposted by OpenAI, sparking a wonderful conversation on learning new languages with Gen AI, all from a lighthearted, personal story connecting technology, efficiency, and learning.
The previous edition of this newsletter, reposted by OpenAI
This moment reminds me: all tech brands, even Silicon Valley’s hottest companies like OpenAI, seek interesting stories about how we use their products in exciting, unexpected ways to start a community conversation.
For now, Gen AI cannot do that nor can it replace a great writer or story. That’s our opportunity and another way we can best partner with Gen AI as marketers and as storytellers.
➡ How do YOU use Gen AI in Marketing? Share your thoughts in the comments! ✍👇
➡ Need a hand getting started? Shoot me a message! ✍ ✉
“The Magic of Generative AI” is still my favorite talk I’ve ever given, hands down. I loved collaborating with Google’s top AI minds on the story and the visuals, building demos that showed how Vertex AI helps marketers like me, and connecting with fellow AI enthusiasts in awesome places like LA and Rome.
But the best part was diving deep into how large language models (LLMs) actually work, reading those mind-bending research papers, and piecing together the “magic” they create. Preparing this talk was like living Google’s innovation mantra: stay curious, experiment, build something useful.
In this newsletter, I’m sharing my reflections on the magic of Gen AI and how Google’s unique innovation culture was key to making these incredible tools a reality.
Curiosity, experimentation, and application: This is the heart of how Google is driving the generative AI revolution. It’s the same formula behind some of our biggest breakthroughs, like Google Search, Translate, and Vertex AI.
Here’s how it works:
Curiosity: This is where it all starts – that burning question of “what if?” or “why not?” Curiosity is what drives us to explore the unknown and challenge the status quo.
Experimentation: Curiosity without action is just daydreaming. Experimentation is where we get our hands dirty, trying new things, making mistakes, and learning from them. It’s the messy but essential part of the process.
Application: The ultimate goal of innovation is to create something that makes a real difference in the world. Application takes those wild ideas and experiments and turns them into practical solutions that people can use and benefit from.
This isn’t just a theory; it’s the blueprint behind Google’s most groundbreaking AI tools.
Embeddings in Google Search: Grasp query intent beyond exact keywords
In 2013, Google researchers authored the seminal paper “Efficient Estimation of Word Representations in Vector Space”. This paper unveiled a revolutionary method for creating word embeddings: mathematical representations of words capturing both their meaning (semantics) and their relationships (semantic similarity). Here’s Google’s innovation formula in action:
Curiosity: Dissatisfied with existing ways of organizing words, such as dictionaries sorting them lexicographically, researchers were curious whether a better approach could capture word semantics and organize words by meaning.
Experimentation: They explored various neural network types, training objectives, and relationship representations. Through experimentation, they discovered how to automatically create word embeddings. A name to remember: an embedding is a mathematical representation of a word that captures its semantic meaning in the form of a vector of numbers (768 of them in many of today’s models).
Application: Way before they powered gen AI, word embeddings found a magical application in semantic search, enabling Google Search 🔍 to grasp query intent beyond exact keywords. For example, a search for “cars that are good on gas” now returns results for fuel-efficient cars, even if the word “gas” doesn’t appear in those results.
Source: “The Magic of Generative AI” talk, Google Gen AI Live and Labs event series
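To make the “semantic similarity” idea concrete, here is a toy sketch of how semantic search ranks documents by cosine similarity between embeddings rather than by keyword overlap. The three-number vectors are made up purely for illustration; real embeddings come from a trained model and have hundreds of dimensions:

```javascript
// Toy semantic search: rank documents by cosine similarity between embeddings.
// The 3-number vectors below are illustrative only; real embeddings are produced
// by a trained model and have hundreds of dimensions.
const docs = [
  { title: 'Top fuel-efficient cars of the year', embedding: [0.9, 0.1, 0.2] },
  { title: 'Best gas grills for summer BBQs',     embedding: [0.1, 0.9, 0.3] },
];
const queryEmbedding = [0.85, 0.15, 0.25]; // "cars that are good on gas"

function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const ranked = docs
  .map((d) => ({ ...d, score: cosineSimilarity(queryEmbedding, d.embedding) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].title); // "Top fuel-efficient cars of the year"
```

Even though “gas” only appears in the barbecue article, the fuel-efficiency article wins because its vector points in nearly the same direction as the query’s.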
Transformer in Google Translate: More accurate translations
In 2017, Google researchers presented “Attention Is All You Need,” introducing the Transformer architecture, built on decision-making and attention-span concepts. It empowers language models to understand context and relationships within word sequences. Curiosity, experimentation, and application were again vital:
Curiosity: In the search to improve the quality of language translation, researchers sought ways to model relationships among words in a sentence.
Experimentation: They experimented with various mechanisms, relationship representations, and training methods, discovering that much could be extracted by simply paying attention to the relationship between each word and every other word in a sentence. They found that these interdependencies could be computed in parallel, which accelerated time to result, and that embedding-based representations could capture long-range dependencies between words, producing fluent, grammatically correct text. Voilà! The Transformer architecture was born, a huge breakthrough in science.
Application: The Transformer revolutionized Google Translate 🌐. Its attention mechanisms are excellent at understanding the relationships between words in a sentence, leading to more accurate translations.
Source: Transformers, FT
Let’s see this in action by translating this sentence from English to Italian: “The cat didn’t cross the street because it was too wide.”
Source: “The Magic of Generative AI” talk, Google Gen AI Live and Labs event series
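For readers who want the math behind “paying attention to every other word,” the paper’s central equation is scaled dot-product attention, where Q, K, and V are the query, key, and value matrices derived from the word embeddings and d_k is the key dimension:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

Every word’s query is compared against every other word’s key, and the resulting weights decide how much of each word’s value flows into the new representation, which is why “it” can latch onto “street” rather than “cat.”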
Gen AI in Enterprise Search: New way of working
Fast forward to 2023: Google Cloud researchers set out to simultaneously tackle two challenges common to many organizations:
How to organize enterprise information scattered across many internal systems
How to make this information accessible and useful for enterprises, and seamlessly available and actionable in applications such as customer service bots, document summarization, or steps in automated workflows.
Not surprisingly, Google Cloud researchers followed the proven innovation framework:
Curiosity: While Google Search was designed to scale to organize the world’s information, researchers started exploring whether the technology could be scaled down and offered to enterprises to organize their own information in a way that is easily accessible and useful to them, and only to them.
Experimentation: Intrigued by the potential to bring together several cutting-edge technologies, researchers used web crawling to discover content on internal websites and in structured sources, and Optical Character Recognition (OCR) to extract content from all sorts of semi-structured and unstructured documents, creating a wealth of knowledge about the enterprise. The researchers then used embeddings to extract and organize the semantic meaning of all of this data. Once an enterprise’s data has been semantically organized into embeddings, the full power of generative AI can be applied to it and leveraged across the Vertex AI platform.
Application: First launched in March 2023, Google Cloud Vertex AI Search 🔍 quickly became “the killer enterprise app”. A killer application, often abbreviated as killer app, is software so necessary or desirable that it proves the core value of some larger technology, such as a video game console, a software platform, or, in this case, gen AI in the enterprise. Killer apps are the pinnacle of innovation: well-designed, easy to use, and solving a real problem for users. Enterprise Search is the killer enterprise app because it unlocks unprecedented levels of productivity and efficiency.
These transformative breakthroughs exemplify Google’s dedication to AI innovation, with continued explorations on the horizon.
With the entire text of Les Misérables in the prompt (1382 pages), Gemini 1.5 Pro locates a famous scene from a hand-drawn sketch
Technical writing is one of my favorite reads. It’s clear, succinct, and informative. DeepMind’s technical paper on Gemini 1.5 epitomizes all I love about technical writing. Read the abstract for a glimpse into the groundbreaking advancements encapsulated in Gemini 1.5 Pro; it’s a masterclass in effective communications. We learn how to deliver maximum insight with minimum word count.
In just 177 words, my DeepMind colleagues articulate:
#ProductCapabilities: “a highly compute-efficient multimodal* mixture-of-experts model** capable of recalling and reasoning*** over fine-grained information from millions of tokens of context”
#UniqueSellingPoint: “near-perfect retrieval (>99%) up to at least 10M tokens, a generational leap over existing models such as Claude 2.1 (200k) and GPT-4 Turbo (128k)”
#UseCases: “surprising new capabilities of large language models at the frontier; when given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person learning from the same content”
Gemini 1.5 Pro is able to translate from English to Kalamang with similar quality to a human
The science of writing succinctly
In a few words, the paper abstract communicates the model’s superior performance, its leap over existing benchmarks, and its novel capabilities. It sparks curiosity about the future potentials of large language models—a true testament of powerful, precise, impactful technical communication.
How did the Gemini 1.5 paper authors achieve this mastery? By following the guiding principles of Brevity (saying more with fewer words) that my friend and thought partner D G McCullough and I recently summarized as “Trust, Commit, Distill”:
#Trust means believing in the power of your message without over-explaining or adding unnecessary details. Trust empowers the communicator to eliminate redundancy, focusing on what’s truly important. The Gemini 1.5 paper authors trust their curious readers to look up terms that may be new to them. On first read, I had to look up “mixture-of-experts,” but the context from my two years of working with data and AI let me “guesstimate” its meaning before getting the proper definition.
#Commit refers to sticking with the essentials of your message, understanding your message’s objective, and resisting tangents or unnecessary explanations diluting the message’s impact. (Which requires discipline!)
#Distill requires boiling your message down to its full potency. Like distilling a liquid to increase its purity, we must strip away the non-essential until the most impactful, clear, and concise message remains. Every word and idea then serves a purpose, and voilà! Your message becomes clearer and more memorable.
The art of replacing 100s of words with a single image
The saying “A picture is worth a thousand words” truly shines in technical communication. A single, well-chosen image can articulate complex ideas with more efficiency and impact than verbose descriptions. The Gemini 1.5 paper’s authors skillfully weave in visual elements, showcasing a deep grasp of conciseness. This approach not only makes complex AI and machine learning concepts approachable and captivating but also boosts understanding and enhances the reader’s journey. It demonstrates that when it comes to sharing the latest scientific breakthroughs, visual simplicity can convey a wealth of information.
With the entire text of Les Misérables in the prompt (1382 pages), Gemini 1.5 Pro locates a famous scene from a hand-drawn sketch
Simplify complexity with brevity
In our fast-paced world, where attention is a rare commodity and people often skim rather than read, the skill of conveying ideas briefly and through visual storytelling stands out as a significant edge. Simplifying complex concepts into engaging visuals and concise explanations can mean the difference between being noticed and being ignored.
Richard Feynman, the celebrated physicist, Nobel laureate, and cherished educator, famously stated, “If you can’t explain it simply, you don’t understand it well enough.”
Richard Feynman quotes
Feynman’s approach isn’t just about words; it involves using visuals and images to make intricate ideas more approachable. After all, the deepest insights are usually the easiest to understand when we apply brevity to break down complexity.
DeepMind’s Gemini 1.5 technical paper exemplifies this principle perfectly. It’s essential reading for anyone intrigued by gen AI (especially with #GoogleCloud #NEXT24 on the horizon), and it’s an exemplary model for those dedicated to honing their communication skills.
“In this report, we present the latest model of the Gemini family, Gemini 1.5 Pro, a highly compute-efficient multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from millions of tokens of context, including multiple long documents and hours of video and audio. Gemini 1.5 Pro achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra’s state-of-the-art performance across a broad set of benchmarks. Studying the limits of Gemini 1.5 Pro’s long-context ability, we find continued improvement in next-token prediction and near-perfect retrieval (>99%) up to at least 10M tokens, a generational leap over existing models such as Claude 2.1 (200k) and GPT-4 Turbo (128k). Finally, we highlight surprising new capabilities of large language models at the frontier; when given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person learning from the same content.” https://storage.googleapis.com/deepmindmedia/gemini/gemini_v1_5_report.pdf
Define the key terms used in the abstract
* #Multimodality: Gemini is natively multimodal. Prior to Gemini, AI models were first trained on a single modality, such as text, or image, and then corresponding embeddings were concatenated. For example, the embedding of an image would be generated by an AI model trained on images, the embedding of the text describing the image would be generated by an AI model trained on texts, and then the two embeddings would be concatenated to represent the image and its transcript. Instead, the Gemini family of models was trained on content that is inherently multimodal such as text, images, videos, code, and audio. Imagine being able to ask a question about a picture, or generate a poem inspired by a song – that’s the power of Gemini.
** #Mixture-of-Experts Model: At the core of Gemini’s groundbreaking capabilities lies its innovative mixture-of-experts model architecture. Unlike traditional neural networks that route all inputs through a uniform set of parameters, the mixture-of-experts model consists of numerous specialized sub-networks, each adept at handling different types of information or tasks—these are the “experts.” Upon receiving an input, a gating mechanism intelligently directs the input to the most relevant experts. This selective routing allows the model to leverage specific expertise for different aspects of the input, akin to consulting specialized departments within a larger organization for their unique insights. For Gemini, this means an unparalleled ability to process and integrate a vast array of multimodal data—whether it’s textual, visual, auditory, or code-based—by dynamically engaging the most suitable experts for each modality. The result is a model that not only excels in its depth and breadth of understanding but also in computational efficiency, as it can focus its processing power where it matters most, without overburdening the system with irrelevant data processing. This approach revolutionizes how AI models handle complex, multimodal inputs, enabling more nuanced interpretations and creative outputs than ever before.
*** #Reasoning: Gemini goes beyond simple pattern recognition. It utilizes a novel architecture called “uncertainty-routed chain-of-thought” to reason and understand complex relationships within and across modalities. This enables it to answer open-ended questions, solve problems, and generate creative outputs that are not just factually accurate but also logically coherent.
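Going back to the mixture-of-experts definition above, here is a purely illustrative routing sketch, not Gemini’s actual implementation (real models learn the gating network and use many fine-grained experts): score the experts, keep the top k, and blend their outputs.

```javascript
// Toy mixture-of-experts routing (illustrative only, not Gemini's implementation):
// a gating signal scores each expert for a given input, we keep the top-k experts,
// and the output is the softmax-weighted sum of just those experts' outputs.
const experts = [
  (x) => x.map((v) => v * 2), // "expert 0": pretend text specialist
  (x) => x.map((v) => v + 1), // "expert 1": pretend image specialist
  (x) => x.map((v) => v * v), // "expert 2": pretend audio specialist
];

function softmax(scores) {
  const exps = scores.map((s) => Math.exp(s));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / total);
}

function mixtureOfExperts(input, gateScores, k = 2) {
  // Keep only the k highest-scoring experts (sparse routing = compute savings).
  const topK = gateScores
    .map((score, i) => ({ score, i }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
  const weights = softmax(topK.map((e) => e.score));

  // Weighted sum of the selected experts' outputs.
  const output = new Array(input.length).fill(0);
  topK.forEach((e, j) => {
    experts[e.i](input).forEach((v, d) => (output[d] += weights[j] * v));
  });
  return output;
}

console.log(mixtureOfExperts([1, 2, 3], [2.0, 0.5, 1.2])); // routes to experts 0 and 2
```

The compute savings come from that slice(0, k): only the selected experts run for a given input, so model capacity can grow without every parameter being touched on every token.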
Did you know that the “T” in ChatGPT stands for Transformer, Google’s revolutionary architecture that brings the concept of “self-attention” to AI? And that Google pioneered custom silicon for deep learning workloads with TPUs? After combing through dozens of technical papers and posts, I summarized my learnings in one visual below.
All the recent AI talk brought back the memory of the fall semester of 2003, when I signed up for a Neural Networks course 👩🎓. After several classes of advanced algebra and calculus, I was excited to see their practical applications in natural language processing and speech recognition use cases. Little did I know that in 2023, computers would not only understand human speech almost perfectly, they would also gain a voice of their own, thanks to decision-making capabilities similar to our own.
I majored in Telecommunications Engineering and always found the Open Systems Interconnection Reference Model, more commonly known as the OSI model, extremely useful in visually depicting all the key layers of the networking tech stack. So I thought to myself: what if I built a similar reference model for AI? After all, at the core of AI lies a neural network. And I’ve successfully demystified a variety of tech stacks using the good ol’ OSI model before, from PaaS to SDN/NFV. Let me know what you think!
And here’s an animated version of “The 6 Layers of Generative AI Technology Stack”. To me, it’s like watching a delicious multi-layer cake being assembled layer by layer, except instead of vanilla cake, lemon custard and cream-cheese frosting, our recipe calls for infrastructure, modeling and application layers as key ingredients. Who knew that a stack of AI layers could be so captivating?
I’m a Marketing Executive, an astute influencer, panelist, and public speaker with recent appearances in Harvard Business Review (HBR). I live and work in San Francisco.