begaTalks.AI #2 - AI & Work: How Teams Are Changing
A recap of the second edition of begaTalks.AI - what Andrei Oros (UiPath) presented, what the panel said, and where the 2026 arc goes next.

Event recap - 9 April 2026, UVT, Sala Vasile Pârvan, Timișoara
A short thank you before the recap
Thank you to everyone who joined us last Thursday at Sala Vasile Pârvan for the second edition of begaTalks.AI. Whether you came for the talk, stayed for the panel, or lingered during the networking hour, your questions and the corridor conversations after are exactly what this community is built for.
For those who had to leave early, want to revisit a point, or share the evening with a colleague who couldn't make it, here is a recap - less a transcript, more a map of where we went.
📸 Full photo album from the evening: view on Google Drive - feel free to download and share your own photos with us.
Where this evening fits
begaTalks.AI is Western Romania's hub for AI practitioners and decision makers - where Data & AI meet impact. The series is built on a simple idea: technical depth, business ROI, and strategic perspective belong in the same room.
Edition #1 last October - Production GenAI & Business Impact - sold out twice at 145 people and confirmed that there is serious regional appetite for this kind of conversation. The 2026 cycle is a four-edition arc: People → Technology → Sectors → Foundations. This evening was the People chapter: what happens to teams, roles, and organizations when AI becomes part of daily work - not in theory, but in the practices of organizations already living the transition.

The talk - Andrei Oros, UiPath
"AI & Work: How Teams Are Changing"
Andrei Oros opened the evening with a practitioner's view of what happens inside an organization when automation stops being a side project and becomes part of how work actually gets done. A few of the threads he pulled on:
From UI automations to agentic processes
UiPath's journey - from 2018 UI automations, through API integration, orchestration, intelligent document processing, GenAI-powered automation, to today's agentic processes - is a useful timeline for anyone trying to understand where enterprise automation sits today. The short version: the industry has moved from activities, to workflows, to entire processes. And the composition of a "team" now routinely includes agents, robots, and people working side by side.
Agentic and robotic - both, not either
One of the most concrete framings of the evening: the future is both agentic and robotic. Think of it as left brain / right brain. Robots handle the structured, logical, efficiency-oriented work - systematic processing where determinism matters. Agents handle the creative, intuitive, decision-making work - adaptability, ambiguity, judgment. Real workflows need both.
The vendor-invoice example made the distinction tangible. In a purely robotic workflow, disputes get kicked to humans - error-prone, slow, and a distraction from business-critical work. In an agentic version, the agent investigates the dispute, communicates with the supplier, asks a human for help when it needs to, and escalates the resolution for human review. The whole loop takes minutes, not days. The human is still there - but doing the review, not the legwork.
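The robotic/agentic split can be sketched in a few lines of Python. Everything here - the `Dispute` shape, the handler names, the status strings - is invented for illustration, not a UiPath API; the point is only where the human enters each version of the workflow.

```python
from dataclasses import dataclass

@dataclass
class Dispute:
    invoice_id: str
    amount_billed: float
    amount_expected: float

def robotic_handler(dispute: Dispute) -> str:
    # Purely robotic workflow: anything off the happy path
    # goes straight to a human queue - the human does the legwork.
    if dispute.amount_billed == dispute.amount_expected:
        return "auto-approved"
    return "human-queue"

def agentic_handler(dispute: Dispute, supplier_reply: str) -> str:
    # Agentic workflow: the agent investigates and talks to the
    # supplier, and only the *resolution* reaches a human, for review.
    if dispute.amount_billed == dispute.amount_expected:
        return "auto-approved"
    if supplier_reply == "corrected-invoice-sent":
        return "resolved-pending-human-review"  # human reviews, not legwork
    return "escalated-with-context"             # agent attaches its findings

d = Dispute("INV-1042", amount_billed=1200.0, amount_expected=1000.0)
print(robotic_handler(d))
print(agentic_handler(d, "corrected-invoice-sent"))
```

Same dispute, two very different human workloads: the robotic path hands over the whole problem, the agentic path hands over a prepared decision.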

Trust as the foundation
Andrei was direct about what makes agentic automation viable at enterprise scale: trust is at the core. Four pillars - resilience and reliability, compliance and governance, open architecture, flexible delivery - cash out practically in things like the UiPath AI Trust Layer: PII and sensitive data filtering, GenAI feature policies, usage telemetry and audit dashboards, context grounding, and no data retention or training on customer data when routing through third-party LLMs.
This is the unglamorous part of the stack. It's also the part without which nothing scales.
Organizational maturity - a table worth staring at
Maybe the most quietly provocative slide of the night: a maturity matrix across five roles (Engineering, Product, Support, People/HR, Marketing) and four levels (Unacceptable, Capable, Adoptive, Transformative).
Unacceptable is the team that calls AI coding assistants "too risky" and has never tested AI-generated code. Transformative is the team that ships LLM-powered features, monitors live metrics, builds an AI-first dev pipeline with guardrails and RAG docs that cuts PR cycle time. Most organizations in the room were somewhere in the middle. The useful exercise isn't placing yourself - it's being honest about where each function sits and what Transformative looks like for that function specifically.
Experiment, learn, adapt - repeat
Andrei's preferred change-management loop is almost boring in its simplicity, and that's the point. Experiment. Learn. Adapt. Repeat. Organizations trying to roll out "the AI transformation" as a single program tend to stall. The ones that compound are the ones that run the loop often, cheaply, and with real feedback.
Org ontology - what actually needs to be in place
For agents and robots to actually work in your company, three things need to be legible: your knowledge (data), your processes (operational), and your skills (what the automation needs to do). Most AI adoption problems are ontology problems in disguise. You can't automate what your organization can't articulate.
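One way to make "you can't automate what your organization can't articulate" concrete is to write the ontology down as data. This is a hypothetical sketch - the keys, names, and the `can_automate` rule are all invented for illustration, not any real schema:

```python
# The three legs of the ontology, made explicit as data
# (every name below is illustrative).
org_ontology = {
    "knowledge": {                # data: where the facts live
        "invoices": "erp.invoices_table",
        "supplier_contracts": "dms/contracts/",
    },
    "processes": {                # operational: steps, in order
        "invoice_dispute": ["match", "investigate", "contact_supplier", "review"],
    },
    "skills": {                   # what the automation must be able to do
        "contact_supplier": ["draft_email", "parse_reply"],
    },
}

MECHANICAL_STEPS = {"match", "investigate", "review"}

def can_automate(process: str, ontology: dict) -> bool:
    """A process is automatable only if every step is either mechanical
    or maps to a declared skill - i.e. the org can articulate it."""
    steps = ontology["processes"].get(process, [])
    return bool(steps) and all(
        s in MECHANICAL_STEPS or s in ontology["skills"] for s in steps
    )

print(can_automate("invoice_dispute", org_ontology))
```

The check itself is trivial; the hard part, as the talk argued, is filling in the dictionary honestly in the first place.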
Jevons paradox - efficiency doesn't reduce consumption
A reminder that deserves to be repeated: technological improvements that make a resource cheaper tend to increase total consumption of that resource, not decrease it. More efficient coding tools mean more code gets written, not less. More efficient LLMs mean more queries, not fewer. Plan for the curve, not for the snapshot - yesterday's frontier model is today's commodity.
Small, local, on-prem
The final thread: small language models running locally or on-prem are closing the gap fast. The cost curve for a given MMLU score is dropping roughly 10× per year. Enterprise-grade performance from models that fit on consumer hardware is no longer hypothetical. For use cases with data sensitivity, latency constraints, or cost pressure, the default is shifting.
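To see why planning for the snapshot fails, it helps to compound the curve. The roughly 10×-per-year figure is from the talk; the dollar amounts below are made up purely for illustration:

```python
# Back-of-the-envelope compounding of a ~10x/year cost drop
# (the 10x rate is from the talk; the prices are hypothetical).
cost = 1.00  # $ per 1k queries at a fixed quality bar, today
for year in range(1, 4):
    cost /= 10
    print(f"year +{year}: ${cost:.3f} per 1k queries")
# A unit of capability that costs $1 today costs ~$0.001 in three
# years - which, per Jevons, tends to mean far more queries, not a
# smaller bill.
```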
Andrei's closing framing stayed with us: move from theoretical knowledge to practical skill. Master the fundamentals of agentic and vibe coding at a level that enables end-to-end, fully functional application delivery.

The panel - "What Actually Works"
Andrei stayed on stage and was joined by:
- Vlad Damian - Head of Delivery, Victory Square Partners
- Adrian Erimescu - CEO, Growceanu; co-founder of Imobiliare.ro, FABER, and ABQ.institute
- Delia Deliu - Senior Lecturer & Researcher, West University of Timișoara; Managing Partner, Analitic Expert; member of the IAASB-PCAOB Working Party
Dan Bugariu moderated.
Four seats, four angles by design - delivery, investment, product, academic research with a governance lens. That mix isn't incidental; it's the shape begaTalks.AI aims for every edition: technical practitioners and strategic decision makers in the same conversation, pushed to be honest with each other.
The premise: between what AI is promising and what teams actually live day-to-day, there is a gap - and the goal of the evening was to explore it without either hype or dismissal. Three threads ran through the conversation.

1. Roles in motion - what's disappearing, what's appearing
The panel opened on the most concrete question possible: what competencies have become non-negotiable that didn't exist in a job description three years ago, and which "classic" skills matter less?
Vlad Damian anchored the conversation in hiring reality. After two decades of watching engineering roles evolve - his own path running from Java developer to delivery leader - the shift is no longer subtle to him. Organizations now need systems thinkers who decompose complex problems and handle ambiguity, who can prompt, review, and validate AI output with the same rigor they used to apply to code reviews. "Senior" in 2026 means something different than it did in 2023.
Adrian Erimescu pushed the provocation further: if a founder walks in and says "my team is me and five AI agents," do you invest or do you run? The conversation kept coming back to a harder question underneath - what does the human co-founder actually contribute now, and where is the leverage genuinely one-sided?
Andrei Oros talked about new roles appearing inside product teams that wouldn't have been anticipated three years ago - people whose job is to design, evaluate, and maintain the contexts and guardrails that agents operate within. Not a prompt engineer in the 2023 sense. Closer to a product architect for AI-powered workflows.
Delia Deliu brought the angle that the rest of the panel couldn't: what happens to professions built on professional judgment - audit, accounting, legal - when AI does in seconds what people used to do in hours? Entry-level roles where reconciliation and risk assessment used to live are disappearing. But the judgment, the contextual interpretation, the liability that sits with a signature - that stays human. And it stays human because the institutional, legal, and ethical frameworks require it to. A new framework she flagged - Corporate Digital Responsibility - points toward disclosure obligations when AI is involved in key decisions, particularly in regulated professions.
2. Human + AI, day-to-day - what the collaboration actually looks like
From abstract roles to concrete work. How decisions get made when AI is part of the room. Where speed is genuinely gained. Where quality gets quietly renegotiated.
Vlad Damian didn't dodge the hard question: when your team writes code with AI every day, does ownership of the code still exist? Does the craftsmanship - the quiet pride of "I thought this through" - survive? His honest answer: the locus of craft is moving. It's less about the lines of code and more about the design, the review, the trade-offs. The people who'll do well are the ones who get more deliberate about that shift, not the ones who either resist the tools or surrender judgment to them.

Adrian Erimescu, observing from the investor's seat, described the pattern that separates teams using AI well from teams still working manually: the good ones don't just run faster - they run differently. They build agents for recurring judgment tasks. They use AI for idea validation before meetings, so meetings start further down the thinking path. They prototype extensively and kill most prototypes without ceremony.
Andrei Oros, from the agentic-automation seat, described daily collaboration as fundamentally about clear delegation. Which decisions does the agent own outright (confidence above threshold), which does it escalate with full context attached, which does the human own end-to-end? The teams that answer those questions explicitly outperform the teams that leave it implicit.
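The explicit-delegation idea can be sketched as a routing function. The thresholds and labels here are hypothetical, not numbers from the talk; the point is that the three ownership zones get written down rather than left implicit:

```python
# Illustrative delegation policy - all values invented for the sketch.
OWN_THRESHOLD = 0.90       # agent decides alone above this confidence
ESCALATE_THRESHOLD = 0.50  # below this, the human owns it end-to-end

def route_decision(confidence: float, context: str) -> tuple[str, str]:
    if confidence >= OWN_THRESHOLD:
        # Agent owns the decision outright.
        return ("agent", "decide")
    if confidence >= ESCALATE_THRESHOLD:
        # Agent escalates, with its full context attached.
        return ("human", f"review with context: {context}")
    # Human owns this class of decision end-to-end.
    return ("human", "own end-to-end")

print(route_decision(0.95, "duplicate invoice, near-exact match"))
print(route_decision(0.70, "amount mismatch, supplier reply ambiguous"))
```

Ten lines of policy, but it forces exactly the conversation the panel described: which decisions belong to whom, and at what confidence.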
Delia Deliu brought one of the most memorable angles of the evening: cognitive dissonance AI - the discomfort that arises when a professional's judgment collides with what an AI system suggests. In audit, where judgment is the product, what do you do when the model says your conclusion is wrong? Do you accept? Do you dig in? The discomfort itself, she argued, is useful - it sharpens critical thinking, provided you don't outsource the thinking to the tool. The risk isn't AI being wrong. The risk is delegating too much and losing the ability to independently validate.
Several practitioners in the room flagged the same warning signs in their own practice: frustration when a model hits a credit limit, or when a preferred model isn't available - signals of unhealthy dependency. One theme crystallized: AI First, but not Human Last.
3. Culture, resistance, and the elephant in the room
The final thread went where these conversations usually don't: the human side. Fear of replacement. Resistance to tools. Leadership that accelerates or slows adoption - often without realizing which it's doing.
Vlad Damian described the reaction he'd watched inside engineering teams when AI tools were introduced - a mix of enthusiasm, skepticism, and quiet fear. The decisive factor wasn't the tooling; it was how leadership framed the change. Teams that were told "this is how you get more done" responded differently than teams that heard "this is how we stay relevant."
Adrian Erimescu, drawing on 21 years of building and leading Imobiliare.ro, named the first cultural mistake he'd avoid if he were introducing AI into a 50-person company today: treating it as a tooling rollout instead of a rewrite of how decisions get made. The hardest part of transformation is almost never the technology. It's the meetings, the incentives, the definitions of what "good work" looks like.

Then the question everyone in the room was thinking but wasn't sure would get asked: UiPath builds tools that automate work people used to do. Has automation already translated into role reductions in the enterprises you work with?
Andrei Oros answered honestly - and with a distinction worth repeating. Yes, certain roles have been reduced. But inside UiPath, that reduction has largely meant mobility, not exit: people moving into different parts of the company, taking on work that the automation created room for, rather than being pushed out. It's not a universal story - context matters, and the answer isn't always the same in every company - but it was a useful counterweight to the narrative that automation is purely subtractive.
Delia Deliu closed the thread with the question she faces on both sides of the lectern: do organizations have a responsibility toward the people whose role changes or disappears because of AI? Or is it simply evolution, and everyone manages on their own? The answer, in her reading, is that the responsibility exists - legally in some jurisdictions, ethically everywhere - and pretending otherwise is itself a governance choice. She also posed the more practical version of the same question: how do you prepare a student today for a profession that may look fundamentally different in five years? You prepare them for the judgment that remains, not the tasks that are leaving.
What the room took home
A few things kept coming up, both on stage and in the corridor after:
- Audit the process before you automate it. Messy processes get worse at scale - not better.
- Use AI for idea validation before meetings. Start the meeting further down the thinking path.
- Build agents for recurring judgment tasks. Shift focus toward governance, guardrails, and validation design.
- Prototype extensively, then kill most prototypes. Without ceremony.
- Externalize tacit knowledge into shared AI contexts. That's how individual know-how becomes organizational capital.
- Watch for dependency signals. Frustration over credit limits or unavailable models is a warning sign, not just a productivity friction.
- Preserve your distinctive voice in AI-assisted outputs. The tool is an instrument, not a colleague.
- Complete nearly-finished personal projects with AI as an accelerant. Skill compounds through shipping.

Closing thought
The question is no longer "do we adopt AI or not." The real question is how you build the organization that makes AI a real advantage - and that requires decisions about structure, culture, and governance, not just new tools.
Thank you
begaTalks.AI is community-owned and ecosystem-powered. It exists because a group of organizations in the region keep choosing to make it happen together - not as sponsors, but as partners who shape the agenda and carry the work.
Thank you to our core partners - Victory Square Partners, DevPlant, Haufe Group - and to our ecosystem partners - ABQ.institute, UVT, UPT, FABER - with a special thanks to UVT for opening Sala Vasile Pârvan to us.
To Andrei Oros, Vlad Damian, Adrian Erimescu, and Delia Deliu - for showing up with honesty rather than talking points.
And to everyone who made the time to be there - see you at the next one.

What's next
Edition #3 comes in May, as part of HackTM Week - the second chapter of the 2026 arc, moving from People into Technology.
Theme: AI Agents - Building the Next Wave
From "how teams are changing" into the technology itself: agentic architectures, the evolving stack, real deployments in production, and an honest look at where agentic AI still breaks.
After that, later in 2026, the arc continues with Sectors (AI beyond tech - healthcare, education, manufacturing, public services) and closes with Foundations (data, governance, regional capacity).
Follow us on LinkedIn (linkedin.com/company/begatalksai) or at begatalks.ai to be the first to get the date and registration link.
Until then - happy building.

