Features

Build with AI Workshop Cebu Empowers Local Tech Community Through Hands-On Innovation

A wide shot of the "Build with AI" workshop at The Company Cebu, showing participants seated at laptops, deeply engaged in hands-on AI development. The room is bright and collaborative, with cables and equipment spread across tables. A facilitator presents at the front, introducing advanced agentic AI concepts. The scene reflects a transformative atmosphere—participants are building generative agents and multi-agent systems with Google's Agent Development Kit, bridging local insight and global technologies.

Participants at AI Pilipinas Cebu’s “Build with AI” workshop at The Company Cebu dive into hands-on agent development using Google’s Agent Development Kit, marking a pivotal moment for local AI innovation.

Our event space in Mandaue held its breath that Saturday afternoon, with thirty-seven laptops arranged across the tightly packed room, their power cords threading between chairs like synapses feeding some distributed intelligence. The Build with AI workshop in Cebu, hosted by AI Pilipinas on May 17, 2025, brought together a diverse range of participants for a hands-on journey into agentic AI. I watched them settle into their seats—most carrying little more than curiosity about programming—and felt that particular stillness that precedes transformation. By mid-afternoon, something unprecedented was taking shape before my eyes: individuals from widely different backgrounds deploying autonomous AI systems of startling inventiveness, systems that could coordinate across specialized roles and handle complex business logic with the precision of production software.

What The Company witnessed in AI Pilipinas’ Jerel Velarde was not the familiar rhythm of incremental progress, but something approaching the alchemical. He opened the workshop with what revealed itself to be a distillation of four decades of artificial intelligence research, transformed into capabilities ready for immediate deployment. Participants were not simply learning tools; they were practicing patterns shaped by the compounding work of entire research communities, building systems capable of autonomous decision-making and the persistent pursuit of goals. As I glanced at their screens, a deeper realization began to form: we were witnessing the emergence of what can only be described as post-comprehension capabilities.

The workshop was not mere democratization of tools, that comfortable narrative in which complexity gradually yields to simplicity until anyone can participate. Instead, the room revealed something more unsettling and infinitely more consequential: the ability to deploy sophisticated knowledge without traditional understanding, where abstraction layers enable genuine competence while concealing the foundational principles that make such competence possible. I found myself thinking of the protective moats that technical complexity once provided, watching them dissolve in real time.

For the founders and investors who might have been sitting alongside me, the implications would have been immediate and visceral. Markets previously sealed behind technical barriers were opening before our eyes to entrepreneurs who could identify valuable applications without having to build foundational infrastructure. Everyone around me was living proof that competitive advantage was migrating away from technical implementation toward something more elusive: the taste required to identify problems worth solving and the market insight to discern valuable applications of compressed expertise.

As the afternoon progressed, and I watched these patterns repeat across the room, I began to understand we were witnessing something particularly consequential for regions like Southeast Asia. Here was a pathway for economies traditionally relegated to implementation roles to access sophisticated capabilities without decades of institutional development. The question that lingered as laptops closed and participants filed out was not whether this transformation would continue, but how quickly traditional assumptions about competitive positioning would need updating.

Building on Four Decades of AI Research

I found myself watching participants across the room drag and drop components with casual confidence, each unaware they were implementing architectural patterns drawn from decades of academic inquiry. Mouse cursors moved decisively between “backstory,” “goals,” “memory,” and “tools,” four deceptively simple categories concealing the fundamental elements of autonomous agency that researchers have wrestled with since the 1970s.
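
What those four categories look like in code is easy to sketch. The example below is a minimal illustration only: the field names follow the workshop description rather than the exact API of Google's Agent Development Kit or any specific framework, and the decision logic that a real agent would delegate to a language model is reduced to a keyword lookup.

```python
# A minimal sketch of the four configuration categories described above.
# Field names ("backstory", "goals", "memory", "tools") follow the workshop
# description; they are illustrative, not the API of any particular framework.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Agent:
    backstory: str                       # who the agent is and how it should behave
    goals: List[str]                     # outcomes the agent autonomously pursues
    memory: List[str] = field(default_factory=list)  # persistent context across turns
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)  # callable capabilities

    def remember(self, observation: str) -> None:
        """Append an observation so later decisions can build on it."""
        self.memory.append(observation)

    def act(self, task: str) -> str:
        """Pick the first tool whose name appears in the task; a stand-in
        for the decision-making a real framework would hand to an LLM."""
        for name, tool in self.tools.items():
            if name in task.lower():
                result = tool(task)
                self.remember(f"{name}: {result}")
                return result
        return "No suitable tool found."


support_agent = Agent(
    backstory="A patient customer-support specialist for a Cebu-based retailer.",
    goals=["Resolve inquiries accurately", "Escalate anything ambiguous"],
    tools={"refund": lambda task: "Refund request logged for review."},
)
print(support_agent.act("Customer asks about a refund for order 1042."))
```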

As participants configured the decision-making frameworks of their agents through the interface, I couldn’t help but think of Les Gasser and Alan Bond, who spent the late seventies in windowless computer labs trying to understand how semi-autonomous problem-solving agents could coordinate through knowledge sharing. Their 1988 consolidation of early research in “Readings in Distributed Artificial Intelligence” had established the theoretical foundations for exactly what was now being assembled through point-and-click interfaces. The participants were unknowingly implementing decades of Stan Franklin and Art Graesser’s foundational work, deploying Michael Wooldridge’s operational frameworks without any awareness of their intellectual heritage.

But what struck me as I watched this historical compression unfold wasn't just that these technical capabilities had become more accessible. We were observing the emergence of new positions within the market. The participants around me could implement autonomous agents but couldn't modify the decision architectures, couldn't anticipate failure modes, couldn't extend capabilities beyond what the interface permitted.

This creates interesting market dynamics. Those who understand the principles embedded within these abstraction layers can anticipate which capabilities will be commoditized next, identify where current abstractions will prove insufficient, and recognize emerging opportunities that require deeper technical understanding. For founders and investors, this suggests focusing not just on companies deploying these democratized capabilities, but on those building and refining the abstraction layers themselves.

The companies creating these interface layers are positioning themselves to capture disproportionate value as the market scales. Understanding the intellectual heritage these interfaces encode becomes valuable market intelligence for strategic positioning and investment decisions.

Why Agentic Workflows Outperform Traditional AI

As AI Pilipinas’ David Panonce introduced the next session on multi-agent systems, I sensed the room’s energy shift. What we had witnessed in Jerel’s opening sessions—where individuals deployed sophisticated autonomous systems—was about to evolve into something more complex. Participants began connecting their individual agents into what Andrew Ng calls “agentic workflows,” and I found myself observing the emergence of computational architectures that mirror the collaboration patterns of effective human organizations.

The distinction Panonce drew between generative AI and agentic systems crystallized as I watched participants implement iterative cycles of planning, execution, testing, and revision. Unlike the reactive, single-pass outputs we’ve grown accustomed to from generative models, these systems could revisit their work, test alternative approaches, and refine their methods based on feedback. The participants were building what amounted to computational approaches to problem-solving that embody the iterative refinement processes that characterize expert performance.
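
As a rough sketch of that loop, and only a sketch: the helper below assumes hypothetical generate_draft and run_tests callables standing in for an LLM call and a test harness, and simply feeds failure reports back into the next draft.

```python
# A minimal sketch of the plan-execute-test-revise cycle described above.
# generate_draft and run_tests are hypothetical stand-ins for an LLM call
# and an execution sandbox.
from typing import Callable, List, Tuple


def refine(task: str,
           generate_draft: Callable[[str, List[str]], str],
           run_tests: Callable[[str], Tuple[bool, str]],
           max_rounds: int = 4) -> str:
    """Draft a solution, test it, feed failures back, and revise."""
    feedback: List[str] = []
    draft = ""
    for round_number in range(max_rounds):
        draft = generate_draft(task, feedback)               # plan + execute
        passed, report = run_tests(draft)                    # test
        if passed:
            return draft                                     # tests satisfied
        feedback.append(f"round {round_number}: {report}")   # revise next round
    return draft                                             # best effort after max_rounds


# Toy demonstration: the "model" only gets the answer right after feedback.
def toy_generator(task: str, feedback: List[str]) -> str:
    return "hello world" if feedback else "hello"

def toy_tests(draft: str) -> Tuple[bool, str]:
    return (draft == "hello world", "expected 'hello world'")

print(refine("say hello world", toy_generator, toy_tests))  # -> hello world
```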

What struck me was not just the sophistication they were achieving, but the performance implications Panonce highlighted. When he cited findings that GPT-3.5 wrapped in these agentic workflow patterns can outperform GPT-4 with traditional single-pass prompting, reaching 95.1% accuracy against 67% on the HumanEval coding benchmark, I realized we were witnessing something beyond technological progress. The architectural framework was proving more valuable than raw computational power.

As the afternoon progressed, participants began implementing patterns that represent the current frontier of AI research: orchestrator-worker hierarchies, where central agents dynamically allocate subtasks to specialized workers; evaluator-optimizer loops, where one agent generates solutions while another provides iterative feedback; and parallelization strategies, where multiple agents evaluate different aspects of complex problems before reaching consensus through structured voting mechanisms.
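
To make the first and last of those patterns concrete, here is a deliberately small sketch, with plan, workers, and evaluators as hypothetical stand-ins for LLM-backed agents rather than anything from the Agent Development Kit: an orchestrator routes subtasks to specialists and accepts each result only when a majority of evaluators votes for it.

```python
# A minimal sketch of orchestrator-worker delegation plus a structured
# majority vote. All callables are hypothetical stand-ins for LLM-backed agents.
from collections import Counter
from typing import Callable, Dict, List

Worker = Callable[[str], str]
Evaluator = Callable[[str], str]   # returns "accept" or "reject"


def orchestrate(task: str,
                plan: Callable[[str], Dict[str, str]],
                workers: Dict[str, Worker],
                evaluators: List[Evaluator]) -> Dict[str, str]:
    """Split the task, delegate each subtask to its specialist, and keep
    only results that a majority of evaluators accepts."""
    results: Dict[str, str] = {}
    for role, subtask in plan(task).items():
        draft = workers[role](subtask)
        votes = Counter(evaluate(draft) for evaluate in evaluators)
        if votes["accept"] > len(evaluators) // 2:        # majority vote
            results[role] = draft
    return results


# Toy demonstration with two specialists and three evaluators.
plan = lambda task: {"research": f"gather facts for: {task}",
                     "writing": f"draft copy for: {task}"}
workers = {"research": lambda s: f"[facts] {s}",
           "writing": lambda s: f"[draft] {s}"}
evaluators = [lambda d: "accept" if d else "reject"] * 3
print(orchestrate("product launch post", plan, workers, evaluators))
```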

Watching this unfold, I began to grasp the deeper competitive implications. We weren't just observing new AI capabilities; we were also witnessing the emergence of new organizational forms. If computational teams can be assembled this rapidly, if artificial organizations can be architected with the same ease as deploying individual agents, then the fundamental assumptions about how businesses scale, how teams form, and how competitive advantage accumulates may likewise require complete reconsideration.

Implementing Local AI Solutions

As the afternoon sessions deepened, participants weren’t just building autonomous systems or artificial organizations; they were encoding something far more sophisticated into their agents. A participant working on customer service automation paused to explain how her system needed to understand the social choreography embedded in “po” and “opo,” those formal linguistic markers that carry hierarchical and relational information far beyond their literal meaning. Another was configuring decision trees that balanced operational efficiency with cultural expectations about professional interaction.

I found myself watching the emergence of what could only be called vernacular artificial intelligence, systems that understood local contexts from their architectural foundations rather than through surface-level customization. This wasn't simple localization, the familiar process of translating interfaces or adjusting cultural references. The participants were embedding cultural intelligence as foundational system behavior, creating agents that understood when formality indicated respect versus distance, when directness signaled efficiency versus rudeness, when temporal flexibility reflected adaptability versus disorganization.
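
A hypothetical, highly simplified sketch of what that can look like in practice, assuming the per-turn instruction approach common to LLM-backed agents; the rules below are invented for illustration and are not taken from any participant's system.

```python
# Cultural intelligence expressed as foundational behavior rather than surface
# localization: the agent's register is chosen from relational context before
# any reply is generated. The rules are simplified illustrations only.
from dataclasses import dataclass


@dataclass
class Interaction:
    addressee_is_senior: bool      # elder, client, or supervisor
    prior_exchanges: int           # how established the relationship is


def system_instruction(ctx: Interaction) -> str:
    """Build the instruction an LLM-backed agent would receive for this turn."""
    if ctx.addressee_is_senior or ctx.prior_exchanges < 3:
        register = ("Use a formal, respectful register. Include the honorific "
                    "particles 'po' and 'opo' where natural in Filipino replies.")
    else:
        register = "Use a warm, conversational register; honorifics are optional."
    return f"You are a customer-service agent for a Cebu-based business. {register}"


print(system_instruction(Interaction(addressee_is_senior=True, prior_exchanges=0)))
```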

Observing these implementations, I began to grasp the strategic implications. We were witnessing the emergence of a new form of competitive differentiation, one that global platforms would struggle to replicate. While international AI systems compete on computational power and general capabilities, these locally embedded systems were accessing competitive advantages rooted in context that couldn't be easily reverse-engineered or commoditized.

But as I watched participants encode these cultural patterns into algorithmic logic, a deeper realization emerged. Cultural patterns evolve continuously through lived social practice, yet AI systems crystallize them into fixed behavioral rules. The agents being built that afternoon embodied the Filipino business culture of 2025, but they would carry those patterns forward unchanged even as the culture itself continued to develop. Cultural intelligence, once algorithmically embedded, risks becoming cultural archaeology.

This mismatch creates what I began to think of as the vernacular trap that regional AI companies must navigate. The more sophisticated the cultural embedding, the more the technology becomes hostage to assumptions about cultural stability that may not hold. For investors and founders, this suggests that competitive advantage from cultural embedding comes with an expiration date, requiring continuous recalibration to maintain relevance as local contexts evolve.

Navigating the Abstraction Trade-off

As the workshop continued into the late afternoon, I began to notice something remarkable beneath the surface excitement. Participants were gaining impressive technical capabilities while becoming part of something larger, an ecosystem in which Google's Agent Development Kit had transformed technical complexity into intuitive creation.

Watching participants customize extensively within these frameworks, I marveled at how they could modify almost everything that mattered for their specific needs, while being freed from the burden of foundational knowledge. They had gained the ability to build sophisticated AI systems, while being liberated from the cognitive overhead of questioning every underlying principle. The more I observed, the more I recognized this as a beautiful inversion in the traditional relationship between capability and complexity.

Historically, technical expertise required mastering underlying principles before gaining meaningful agency over technological tools. But here, masterfully designed abstraction layers delivered that agency without the years of foundational study typically required. The participants were experiencing what felt like technological wizardry, wielding powers that would have required graduate-level expertise just months earlier.

This enchantment deepened when I overheard conversations between participants about what constituted “agentic AI.” Google had broadened the definition to include rule-based systems, Microsoft presented an elegant spectrum from simple responses to full autonomy, Amazon emphasized self-determined execution, OpenAI focused on goal achievement in complex environments, and Anthropic distinguished between predefined workflows and dynamically directed agents. What Michael Wooldridge had once worried might become terminological confusion had instead become a rich horizon of possibilities.

For founders and investors, this represents another strategic opportunity. Companies can now access sophisticated AI capabilities that would have previously required substantial R&D investment and specialized talent acquisition. The platforms provide not just tools but entire architectural philosophies refined through billions in research, allowing companies to focus their innovation energy on unique market applications rather than foundational infrastructure.

We were witnessing the emergence of a new form of technological partnership—where the most complex decisions are handled by platform providers with unparalleled expertise, freeing entrepreneurs to focus on the creative work of identifying valuable applications and serving specific markets with unprecedented sophistication.

Scaling AI Capability Development

A month later, I found myself reflecting on the patterns that had emerged from that Saturday afternoon in Mandaue. What made the workshop revolutionary wasn’t its technical content. Google’s documentation, after all, was freely available online. The breakthrough lay in the conviction that ordinary people could become AI creators, that four hours of guided practice could compress months of self-study, that building artificial intelligence was a craft that could be taught and learned like any other.

This realization illuminated the broader strategic implications. We weren't just observing the democratization of AI tools; we were also witnessing proof that these capabilities could be rapidly transferred across professional contexts without traditional technical prerequisites. For founders and investors, this suggests we're approaching an inflection point where competitive advantage will increasingly depend on speed of capability transfer rather than technical complexity alone.

The participants who filed out of that conference room that Saturday evening carried more than new skills. They carried proof that the future wasn't something that merely happened to them, but something they could build, debug, and deploy of their own volition. In a world increasingly shaped by artificial intelligence, they had claimed what may be the most powerful position possible: creator.

The workshop had ended, but the building was just beginning. New practitioners were teaching machines to think, solving local problems with global technologies, quietly revolutionizing what it meant to live in an AI-powered world. They weren’t waiting for the future to arrive—they were coding it into existence, one agent at a time.

Build with AI was presented by LegalMatch Philippines, powered by Google’s Agent Development Kit, and hosted by The Company Cebu – Mandaue, in partnership with Full Scale and with community support from PizzaPy and AI Gen Cebu. Future workshops and community events can be found through AI Pilipinas Cebu’s growing network of local technology organizations.

Works Cited

Anthropic. (2024). Building Effective AI Agents. Retrieved from https://www.anthropic.com/engineering/building-effective-agents

Bond, A. H., & Gasser, L. (Eds.). (1988). Readings in Distributed Artificial Intelligence. Morgan Kaufmann Publishers.

DeepLearning.AI. (2024). Four AI Agent Strategies That Improve GPT-4 and GPT-3.5 Performance. The Batch. Retrieved from https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/

International Foundation for Autonomous Agents and Multiagent Systems. (2024). AAMAS Conference Series. Retrieved from https://aamas2025.org/

TechCrunch. (2025). No one knows what the hell an AI agent is. Retrieved from https://techcrunch.com/2025/03/14/no-one-knows-what-the-hell-an-ai-agent-is/

Wooldridge, M. (2009). An Introduction to MultiAgent Systems (2nd ed.). John Wiley & Sons.