INTRO

Welcome to the first edition of Level.UP, brought to you by UP.Labs.

The world of AI has never been louder. Every day brings another viral demo, another billion-dollar valuation, another headline promising transformation. 

But when you're the one tasked with actually deploying these technologies inside real organizations, with real budgets and real consequences, you learn that the gap between demo and deployment is where most bets quietly fail.

We're launching Level.UP to help you cross that distance.

In each edition, we'll cut through the noise to examine what's actually working inside complex systems: the patterns, constraints, and hard-won lessons that separate operators who move from pilot to revenue from the ones who don't.

This week: Why hollowing out your junior bench can create long-term risk, what happens when viral AI demos skip security fundamentals, and how professional sports are becoming an unexpected proving ground for enterprise AI.

Think someone else needs this? Forward it to a friend/colleague navigating the same terrain.

MOVING THE WORLD AHEAD

Scaling AI Without Burning Your Best People

Entry-level roles are vanishing across the US. Entry-level job postings have dropped 35% since 2023, and roughly two-fifths of global leaders report cutting or scaling back junior positions.

It’s all part of a gamble that AI will pick up the slack: Trim training budgets, lean harder on senior staff, and let AI absorb the rest. Fewer people, faster output, lower cost. 

But the reality is messier. Senior engineers find themselves stretched across design, testing, stakeholder management, and the constant cleanup of AI systems that sound confident but miss context. Velocity increases, but quality suffers.

One mid-level developer at a Big Tech firm described watching this unfold in real time: “Without an expert who knows how to prompt and guide it,” he said, “AI is just a supercar with no driver.”

OUR TAKE

When cost reduction is framed as innovation, the consequences materialize quickly: accelerated burnout among senior talent, expanding rework cycles, and a depleted bench of professionals who hold the institutional knowledge that AI cannot replicate. Push this approach further, and you risk an exodus of your most experienced people, taking with them the contextual judgment that no model can reconstruct.

This is a tension we see repeatedly. Organizations rush to deploy AI before articulating what they are genuinely trying to achieve. Efficiency becomes the objective rather than the byproduct of smarter operations. That distinction matters, not just operationally but in how your organization is valued. Markets reward sustainable capability.

The enterprises navigating this effectively share a common discipline: they treat AI as augmentation, not substitution. They maintain clarity about what they are optimizing for and where human judgment continues to compound value. 

The goal is to deploy AI to elevate what your people can accomplish, not to extract more from fewer people until the model breaks.

Beyond Viral Demos: What It Takes To Deploy AI Securely

Moltbook took the internet by storm this month with an ambitious pitch: a social network built exclusively for AI agents, where bots could exchange code, gossip about their human owners, and communicate in ways humans couldn't fully understand. 

Then the security researchers showed up. 

It turns out the buzzy platform had exposed the private messages of thousands of bots, the email addresses of more than 6,000 users, and over a million credentials.

Cybersecurity firm Wiz, which discovered the vulnerability, called it a textbook example of speed outpacing fundamentals. As cofounder Ami Luttwak put it: “Although it runs very fast, many times people forget the basics of security.”

Moltbook isn't an outlier. Tools like DeepSeek have been linked to multiple enterprise breaches. AI agents have inadvertently doxxed their owners, leaked Social Security numbers, and even used stored credit card information to make unauthorized purchases.

OUR TAKE

The strategic appeal of open-source AI tools is clear: rapid deployment, cost efficiency, and accessibility. However, when these systems handle proprietary data or customer information, or operate in regulated environments, the calculus shifts fundamentally. Speed without governance creates exposure not just to technical vulnerabilities but also to compliance failures and erosion of stakeholder trust.

Leaders navigating this landscape must move beyond basic adoption questions to more rigorous inquiry: Where does this system reside? What data does it access? Who owns the outputs? What happens in the event of a compromise? Can the system's behavior be audited post-deployment?

There’s also an organizational dimension that warrants attention. Successfully deploying frontier technologies requires building confidence across the enterprise, from the board to the front lines. That confidence emerges when teams demonstrate disciplined evaluation, appropriate tool selection, and a clear understanding of security requirements.

In practice, secure AI isn’t about choosing slower tools. It’s about sequencing. The teams that get this right build guardrails first, then scale with confidence. They have both the agility to innovate and the institutional trust to sustain it.

SCALING UP

Ready to work smarter? Here are the tools we're using to actually get more done:

  • AI exec assistant: Howie can be cc'ed on emails and automatically schedules external meetings based on actual availability.

  • Task delegator: Fireflies AI integrates with Zoom to capture action items when your name is mentioned and auto-creates tasks. Pair it with Reclaim AI to schedule those tasks on your calendar based on priority and due date.

  • Investor pipeline and CRM: Fundingstack and Foundersuite track conversations, funding stages, and follow-up timing in one place.


HOT TAKES

Image courtesy of Google Cloud.

From the Halfpipe to the Production Line. Team USA is using AI to analyze the physics of snowboard tricks in real time, reviewing runs before athletes are back on the chairlift. Google's bet: if systems hold up at 50 mph in extreme conditions, they can translate to manufacturing safety protocols, robotic surgery, and other environments where split-second precision matters. → Read more

Plugins Spook Wall Street. Anthropic's new legal automation tool for Claude triggered a sharp selloff across legal and enterprise software stocks. The disconnect: studies still show AI agents don’t reliably drive revenue, and lawyers get burned by hallucinated case law. Market fear is moving faster than operational reality. → Read more

Touchdowns That Teach Supply Chain. The NFL's Big Data Bowl turns game film into predictive models. And the implications go well beyond football: spatial intelligence, real-time decision-making, and processing under pressure are the same capabilities reshaping manufacturing, robotics, and physical operations. → Read more
