Hi, this is Jerry with a reflection inspired by the Pragmatic Engineer article “Two Years of Using AI in Software Engineering.” In this post, I break down how my journey with GenAI tools has paralleled, diverged from, and built on the industry-wide experiences covered in Gergely Orosz and Birgitta Böckeler’s deep dive.
While their article provides a sweeping overview of tooling evolution, mental models, and workflows, my goal is to share the lived experience of applying these tools on the ground: the messy, exploratory, and sometimes magical transition to working with AI as a daily collaborator.
Evolution of Tools in Our Context
Our first serious encounter with AI coding assistance came through GitHub Copilot. It felt like autocomplete had been supercharged. It wasn’t always correct, but it was fast. We began using it to scaffold entire classes, lean on Tab
to auto-complete logical structures, and then quickly modify what it got wrong. It became a frictionless way to get a first draft, and while it wasn’t revolutionary, it undoubtedly sped up our daily work.
From late 2023 to mid-2024, we expanded our usage to ChatGPT. It evolved from a coding assistant into a sounding board for architectural discussions, ideation, and stream-of-consciousness problem-solving. It became a rubber duck with an encyclopedic memory. We learned to pair our instincts and architectural expertise with its suggestions—always verifying, never assuming.
This evolved again with our embrace of Deep Research tooling. We started using GenAI to compare technical approaches, select libraries, validate direction, and clarify complex topics. Often, we’d vet AI responses with a second source—usually Perplexity—to ensure accuracy.
At the start of 2025, we adopted Cursor as a team. That became a turning point. We stopped thinking of ourselves as just coders and started acting more like orchestration engineers—designing, verifying, and directing. Cursor, paired with Claude Code, Codex, and tools like Manus and n8n, helped us adopt a mantra: “There’s always a better and faster way to work than I did yesterday.”
This shift wasn’t just about productivity. It was about admitting we didn’t have all the answers. Once we embraced that, the gains became exponential.
Of course, not every tool was flawless. Cursor and GPT-based tools sometimes spiral into over-engineered solutions, and they can be aggressive with code rewrites, breaking working systems. To combat this, we developed layers of rules, checkpoints, incremental commits, and a religious commitment to KISS principles. No single practice solved everything, but each added resilience. And over time, this layered approach has evolved into something powerful.
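To make that concrete, here’s an illustrative sample of the guardrail rules we keep in front of the assistant. The wording is a hypothetical sketch of the plain-text project rules we layer into a repo, not a canonical Cursor rules file:

```
# Project guardrails (illustrative)
- Prefer the simplest solution that passes the tests; no speculative abstractions.
- Never rewrite working code outside the scope of the current task.
- Work in small, incremental steps and stop for review after each one.
- If a change would touch more than a handful of files, explain the plan first.
- Follow KISS: if a pattern, layer, or helper isn't clearly needed, leave it out.
```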
💡 Case Study Break:
When Fathers’ UpLift needed to unify their fragmented SaaS tools into a custom EHR system, Digital Scientists delivered a complete, HIPAA-compliant solution in just 8 weeks—boosting reimbursements and freeing staff from administrative headaches.
Working with AI: What Clicked
We’ve come to think of AI as a highly capable junior developer: a brilliant mind with encyclopedic recall, but prone to rabbit-holing, overengineering, and making small mistakes that can cascade if left unchecked. Even the most advanced agentic tools today still require guidance, structure, and feedback. They need a manager—and that manager is us.
This mental shift was critical. Rather than viewing the AI as a tool we “use,” we began to see it as a collaborator we direct. And that meant retraining our mindset:
- Think in parallel: Delegate subtasks wherever possible. Your concurrency becomes your productivity gain.
- Be a great manager: Provide the AI with precise context, appropriate resources, and detailed feedback in every loop.
- Context is currency: The more specific the prompt, the more efficient the result. Clarity saves time and tokens.
- Log, commit, checkpoint: Ask the AI to document its work, log memory, and commit often. Build audit trails that allow reversibility (see the sketch after this list).
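To show what that last habit looks like in practice, here’s a minimal sketch of a checkpoint helper. It’s our own illustration built on plain git; the function name and commit-message format are hypothetical:

```python
import subprocess
from datetime import datetime, timezone

def checkpoint(step_description: str) -> None:
    """Record one AI-assisted step as its own labeled commit.

    Small, labeled commits are the audit trail: a bad AI edit becomes
    one `git revert` instead of an archaeology session.
    """
    timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    # Stage everything the AI touched, then commit it as a single unit.
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(
        ["git", "commit", "-m", f"checkpoint({timestamp}): {step_description}"],
        check=True,
    )

if __name__ == "__main__":
    checkpoint("extract validation logic into its own module")
```

Whatever form the helper takes, the point is reversibility: each AI step lands as a unit you can inspect, accept, or roll back on its own.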
One of the biggest unlocks was discipline. We started prompting like pair programmers, not taskmasters. We insisted on logging every step, documenting every decision, and applying rigorous review standards to all AI-generated code. We kept tests tight, practiced TDD where possible, and refused to cut corners.
The results? Game-changing. We routinely completed weeks of work in hours, with better test coverage than ever before. But the moment we let go of oversight or accepted complexity creep, the pendulum swung the other way. Overengineered solutions snuck in. The AI doesn’t default to first principles or simplicity unless explicitly told.
In this AGI/ASI-adjacent era, re-learning is now part of the job. The entire work experience evolves weekly. And success means leaning into that evolution without surrendering critical thinking.
Our Workflow Today
By 2025, AI tooling isn’t something we reach for occasionally—it’s embedded in the way we work. But rather than enforcing rigid rules, we embrace exploration. Each engineer on our team is a pioneer in their own right. The tooling ecosystem is moving too quickly for static standards to hold up—so we instead create space for experimentation.
Everyone runs their own tool experiments, from Claude Code to Cursor, Operator, Deep Research, and Perplexity. Once a week, we gather to share learnings, insights, and failures. This ongoing sync ensures we’re learning together, even if our approaches are divergent.
To maintain quality, we focus intensely on restraint. Vibe coding might feel fast, but it can easily introduce noise. We counter this with layered best practices and prompts designed to instill simplicity and incremental thinking. We constantly ask the assistant to simplify and explain: “What’s the simplest way to achieve this? Is this overkill? Could this be reduced?”
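As a rough sketch of how we bake those questions into every request (the wrapper and its wording are hypothetical, not taken from any particular tool):

```python
SIMPLICITY_CHECKLIST = """\
Before finalizing, answer briefly:
1. What's the simplest way to achieve this?
2. Is any part of this solution overkill for the requirement?
3. Could the code be reduced without losing correctness or clarity?
Then apply any reductions you identified."""

def with_simplicity_guardrails(task_prompt: str) -> str:
    """Append our standard simplicity checklist to a task prompt."""
    return f"{task_prompt.strip()}\n\n{SIMPLICITY_CHECKLIST}"

if __name__ == "__main__":
    print(with_simplicity_guardrails("Add pagination to the /patients endpoint."))
```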
Refactoring cycles are routine. Abstractions are scrutinized. We don’t outsource design; we pair with it.
We share prompt snippets, experiment with Model Context Protocol (MCP) tools, and teach each other strategies. But we don’t canonize one stack or method. The terrain changes weekly, and agility matters more than consensus.
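For readers who haven’t tried MCP yet, here’s roughly what one of those experiments looks like: a minimal tool server sketched with the official MCP Python SDK’s FastMCP helper (installed via `pip install mcp`). The team-notes tool itself is a made-up example:

```python
from mcp.server.fastmcp import FastMCP

# Name the server; MCP-aware clients show this when listing tool servers.
mcp = FastMCP("team-notes")

@mcp.tool()
def search_notes(query: str) -> str:
    """Search our shared prompt-snippet notes (stubbed for illustration)."""
    # A real server would query a datastore; this stub just echoes the query.
    return f"No notes matched '{query}' in this stub."

if __name__ == "__main__":
    # Runs over stdio by default, so a client like Cursor or Claude Code
    # can launch it as a local tool server.
    mcp.run()
```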
And while it may go without saying: we don’t send sensitive data into LLMs, and we isolate runtime environments to prevent scope creep. Evaluation happens via code, tests, and good engineering hygiene—not trust.
In this new mode of work, engineering means directing a set of collaborators—human and AI alike—with clarity, discipline, and a healthy respect for the unknown.
💡 Case Study Break:
In just 9 months, Digital Scientists helped launch Never Alone, a remote patient monitoring platform for older adults. It included a tablet app, care dashboard, and a patent-pending caregiver ecosystem.
Final Thoughts
Reading the Pragmatic Engineer piece was like holding a mirror up to our last two years. It validated many instincts, framed our experiences in clearer terms, and gave us a shared language.
Yes, GenAI changes how we build software. But more importantly, it changes how we think, collaborate, and problem-solve.
We’re not outsourcing engineering to AI. We’re evolving what engineering means.
How Digital Scientists Can Help
Whether you’re a health tech startup, enterprise product leader, or innovation-driven non-profit, Digital Scientists can help you:
- Integrate GenAI into your workflows
- Rapidly prototype AI-powered applications
- Build resilient, scalable, human-centered software solutions
With years of experience in design-led, AI-enhanced software development, we’re ready to be your partner through this transformation.