Artificial intelligence is developing rapidly and offers enormous opportunities for productivity improvements. Yet its implementation within organizations is often difficult and uneven. A one-size-fits-all approach doesn’t work: a fit-for-purpose model selection approach is essential, says Sanjeev Azad, Global Chief Innovator & CTO (APAC) at GlobalLogic.
Successfully integrating AI into organizations and work processes requires more than just providing the right tools. At GlobalLogic, “we take the hype beyond AI tools to an AI-First Product Engineering framework where AI becomes our trusted co-creator,” said Sanjeev Azad, a proponent of the responsible use of AI, in an interview with Enterprise Times.
Enterprise Times: Beyond the hype, how should enterprises decide where to deploy AI, and where not to?
Sanjeev Azad: When enterprises are seeking digital intelligence, it is crucial to look beyond the hype and strategically integrate AI into workflows. AI delivers the strongest value in areas driven by large volumes of unstructured data and repetitive decision-making, such as customer support, marketing intelligence, research summarisation, revenue operations, and data discovery. Successful deployments start with identifying the right use case.
At GlobalLogic, our innovation engine is rooted in a persona-centric software development lifecycle (SDLC). Be it the developer, designer, architect, product owner or tester, each persona interacts with standardized AI “actions,” ensuring consistency, speed, and accuracy. When we say that AI is purposefully deployed across every development stage, from experience design to architecture, development, and testing, we take the hype beyond AI tools to an AI-First Product Engineering framework where AI becomes our trusted co-creator.
Having said that, it is equally important to know where not to deploy AI prematurely. GlobalLogic, for its part, does not buy into the “one-size-fits-all” hype; we embed AI with purpose, not presence. Enterprises should be cautious in high-risk, highly regulated, or trust-sensitive environments until reliability, explainability, security, and cost controls are firmly established. A fit-for-purpose model selection approach therefore becomes crucial. For instance, high-reasoning LLMs should be reserved for tasks that genuinely need them, say design or decision-making, while lighter open-source models handle generative or repetitive tasks. This optimizes cost, accuracy, and sustainability, avoiding the “BMW for a short commute” syndrome that often drives up enterprise AI bills unnecessarily.
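The fit-for-purpose idea described above can be sketched as a simple routing rule: send only reasoning-heavy task types to an expensive model tier and default everything else to a lighter one. This is a minimal illustration; the tier names, prices, and task categories below are assumptions for the sketch, not GlobalLogic's actual configuration.

```python
# Hypothetical model tiers; names and per-token costs are illustrative only.
MODEL_TIERS = {
    "high_reasoning": {"model": "large-reasoning-llm", "cost_per_1k_tokens": 0.015},
    "lightweight":    {"model": "small-open-model",    "cost_per_1k_tokens": 0.0004},
}

# Task categories that genuinely need deep reasoning (design, decision-making);
# everything else defaults to the lighter, cheaper tier.
HIGH_REASONING_TASKS = {"architecture_design", "decision_support"}

def select_model(task_type: str) -> dict:
    """Route a task to the cheapest tier that can handle it."""
    tier = "high_reasoning" if task_type in HIGH_REASONING_TASKS else "lightweight"
    return MODEL_TIERS[tier]
```

In practice the routing table would be driven by measured accuracy and cost data per task type rather than a hand-written set, but the principle is the same: the expensive tier is opt-in, never the default.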
Ultimately, enterprises must prioritise reliability, usefulness, security, scalability, and cost with discipline, because AI succeeds not when it is deployed everywhere, but where it is deployed with readiness and purpose.
Enterprise Times: Agentic AI and ‘Agentic FTEs’ – How does this practically change the roles and workflows inside tech teams?
Sanjeev Azad: When we bring Agentic AI and Agentic FTEs into the conversation, we first need to differentiate between the two. A defining step for GlobalLogic has been this differentiation, driven by our work to conceptualize Agentic FTEs. Instead of building countless, disconnected AI agents, we have built a methodology that collates multiple specialized agents into Agentic Full-Time Employees (FTEs). These agents mirror human business functions while retaining human coordination for judgment-based decisions. This methodology balances automation with accountability, serving as a model for AI-human collaboration in enterprise environments.
The key point is that Agentic FTEs are not replacements but AI-powered co-workers that automate specific skills, not entire jobs. The fundamental shift for tech teams is from executing tasks to supervising intelligent systems. Developers no longer just build models but actively validate, audit, and steer agent behaviour using continuous explainability, bias tracking, and drift monitoring. Engineers move from writing every rule to supervising how agents reason, make decisions, and adapt in production. Product managers, compliance teams, and business users also become active participants in the AI lifecycle rather than downstream reviewers. In practice, Agentic FTEs centralize knowledge across personas, ensuring shared learning and reduced rework.
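The pattern described above, automating specific skills while escalating judgment-based decisions to a human, can be sketched as a small class. The role name, skill names, and escalation callback are hypothetical; this is a structural illustration under those assumptions, not GlobalLogic's implementation.

```python
from typing import Callable

class AgenticFTE:
    """Groups specialised agent skills under one job-like role.

    Known, well-bounded skills are automated; anything outside them is
    escalated to a human-in-the-loop callback for judgment.
    """

    def __init__(self, role: str, escalate: Callable[[str], str]):
        self.role = role
        self.skills: dict[str, Callable[[str], str]] = {}
        self.escalate = escalate  # human reviewer hook

    def register_skill(self, name: str, handler: Callable[[str], str]) -> None:
        self.skills[name] = handler

    def handle(self, task: str, payload: str) -> str:
        # Automate registered skills; escalate judgment calls to a human.
        if task in self.skills:
            return self.skills[task](payload)
        return self.escalate(f"[{self.role}] needs human judgment: {payload}")

# Hypothetical usage: a support-analyst FTE that triages automatically
# but escalates refund approvals.
fte = AgenticFTE("support-analyst", escalate=lambda msg: f"ESCALATED: {msg}")
fte.register_skill("triage", lambda p: f"triaged: {p}")
```

The design choice worth noting is that escalation is the default path: an Agentic FTE only automates what has been explicitly registered, which keeps accountability with the human coordinator.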
Enterprise Times: How is GlobalLogic leveraging the power of VelocityAI to address the most complex tech challenges across industries such as healthcare, telecom, EdTech, and energy?
Sanjeev Azad: When we look at how GlobalLogic is applying VelocityAI across industries such as healthcare, telecom, EdTech, and energy, the key shift lies in how engineering complexity is managed, not through isolated automation, but through connected, context-aware intelligence across the software lifecycle.
VelocityAI transforms the SDLC into a persona-centric, intelligence-driven journey, where every role operates within its natural context while remaining continuously aligned to business intent, domain requirements, and regulatory expectations. This approach enables enterprises to achieve 20–70% faster product development cycles, particularly in highly regulated and large-scale environments.
At the core of VelocityAI is our Context-Aware Knowledge Engine (CAKE.AI), which preserves industry and compliance context across all stages of development. This is especially critical in sectors like healthcare and energy, where frameworks such as HIPAA and ISO must be adhered to throughout, and in telecom and EdTech, where rapid innovation must coexist with evolving standards and scale requirements.
Rather than relying on manual handoffs, VelocityAI creates automation with continuity, where requirements guide design, design informs development, and test outcomes feed directly back into optimization. Architects use compliance-validated blueprints, developers accelerate coding and modernization through AI-assisted environments, and QA teams adopt zero-touch testing with traceability built in.
A defining capability is Compliance-as-Code, which converts regulatory frameworks into executable rules embedded across the SDLC. This ensures continuous validation, governance, and auditability without slowing delivery.
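The Compliance-as-Code idea above, turning regulatory requirements into executable checks that run inside the delivery pipeline, can be sketched as follows. The rule names and artifact fields are assumptions for illustration; a real implementation would map each check to a specific clause of a framework such as HIPAA or an ISO standard.

```python
# Each function encodes one regulatory requirement as an executable check.
def check_encryption_at_rest(artifact: dict) -> bool:
    return artifact.get("storage_encrypted", False)

def check_audit_logging(artifact: dict) -> bool:
    return artifact.get("audit_log_enabled", False)

# Rule registry: rule name -> executable check (hypothetical rule set).
COMPLIANCE_RULES = {
    "encryption_at_rest": check_encryption_at_rest,
    "audit_logging": check_audit_logging,
}

def validate(artifact: dict) -> list[str]:
    """Run all rules against a deployment artifact.

    Returns the list of failed rule names; an empty list means the
    compliance gate passes and delivery can proceed.
    """
    return [name for name, rule in COMPLIANCE_RULES.items() if not rule(artifact)]
```

Run as a pipeline gate, `validate` gives continuous, auditable evidence of compliance on every build, which is the "without slowing delivery" property: checks run automatically rather than in a separate review cycle.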
In essence, VelocityAI enables enterprises to solve complex technology challenges by harmonizing human creativity with AI-driven automation, allowing teams to deliver faster, compliant, and higher-quality digital solutions across industries.
Enterprise Times: What’s the biggest barrier enterprises face in operationalising AI at scale?
Sanjeev Azad: We talk about “AI at scale,” but we rarely talk about the Incentive Gap. In the Indian corporate context, we have plenty of talent that can build a model, but almost no one knows how to operationalize it into a P&L.
The biggest barrier is that our current KPIs don’t reward the “failure” that comes with AI experimentation. When a pilot fails to show ROI in six months, it’s scrapped. But AI isn’t a plug-and-play appliance; it’s more like an apprentice that needs to be trained on the job. Until we align middle-management incentives with long-term AI adoption—and bridge the gap between the “coders” and the “business owners”—AI will remain a polished slide in a boardroom deck rather than a driver of the bottom line.
Enterprise Times: As AI becomes more autonomous, how should industry players enforce responsibility, compliance, and sustainability at the code level?
Sanjeev Azad: As AI becomes more autonomous, I often see the industry responding with more policies, more frameworks, and more reviews layered on top. But the real question, in my view, is simpler: where does responsibility actually live? If AI is increasingly writing, testing, and running software, then responsibility has to move closer to the code itself. That’s the lens I bring to GlobalLogic as well. We have designed an AI-First, persona-centric SDLC where every role, from developer, architect, and designer to tester, works with standardized AI actions, so autonomy grows without losing clarity, control, or human judgment.
The same thinking shapes how we handle compliance and sustainability. Across the industry, these are often treated as afterthoughts. At GlobalLogic, we embed them by design. Through Compliance-as-Code, regulatory requirements become real-time checks inside the development pipeline. Accountability is reinforced through Agentic FTEs, where AI is structured like real job roles that automate tasks but escalate decisions that require judgment to humans. Sustainability is treated as an engineering responsibility through initiatives like Captain Code Greenify and fit-for-purpose model selection, which help teams reduce unnecessary compute without compromising performance.
That’s how I believe the industry can ensure that as AI becomes more autonomous, it also becomes more responsible, by making responsibility, compliance, and sustainability part of how the code is written, not just how it’s governed.
