Jeremy Lloyd

Software Tips

Updated on 2026-03-04.

I’m continuously refining my opinions on how software should be made. Here are my opinions & tips in one place.

I’m not the most qualified person to make this kind of list. I’ve done it anyway, because:

  • I want to curate my thoughts for my colleagues and friends
  • I want a guide to help my AI agents make good decisions
  • Books from experts are great resources for learning, but they are not concise and are less helpful to non-experts.
  • Style guides are common, but they distract from the more important topics of maintainability and reliability.

My recommendations are shaped by my experience working in small (2-10) teams, and developing solo.

Practices

These practices aim to help you quickly develop software that is functional, reliable, maintainable, and enjoyable to use.

Coding

  • Name things explicitly. Readability matters. energy_mwh and energy_market are better than energy and market1. If a variable’s unit is ambiguous (e.g. energy), append it to the variable name (energy_mwh). Comments are often an indicator that variables, functions and properties haven’t been named meaningfully enough.
  • Code should fail fast and loud, rather than silently. Raise errors (custom errors if necessary for readability) and handle them explicitly. Log errors in a readable and easily accessible format. Don’t return None when attempting to retrieve data from an inaccessible source.
  • Merge unfinished feature branches regularly, rather than merging long-lived branches only after completion. This means testing and merging your progress ~daily. Frequent merges prevent painful conflicts and big-bang releases, keeping your software working and your team moving steadily in the right direction.
  • Refactor for maintainability, continuously. Technical debt should be managed continuously by documenting and upholding high standards. Use design docs (before starting development) and code reviews to prevent maintainability regressions. Additionally, read up on common code smells (duplicate code, bloatedness, shotgun surgery, over-coupling) to help you identify symptoms for change and learn how to improve your code’s maintainability.
  • Use static type hints & checks, rather than omitting types. Use Python type hints, use TypeScript over JavaScript, and use typing when it is an option.
  • Exclude secrets from version control. Store passwords inside a .env file, exclude it from version control, and read the values into your Python code when required. Alternatively, deploy your secrets as environment variables in your Docker image, or use a service like 1Password or Azure Key Vault to store them externally.
  • Understand existing conventions before changing them. Project-specific conventions can improve maintainability.
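
As a sketch of the first two points, here’s a hypothetical function whose names carry their units and whose signature is fully typed (the energy-trading names are invented for illustration):

```python
def settlement_cost_gbp(energy_mwh: float, price_gbp_per_mwh: float) -> float:
    """Cost of settling a trade. Every name carries its unit."""
    return energy_mwh * price_gbp_per_mwh


# No comment is needed to explain the units involved:
cost = settlement_cost_gbp(energy_mwh=12.5, price_gbp_per_mwh=80.0)
```

A static type checker (ty, mypy) can then catch a misplaced argument before any test runs.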
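
To illustrate failing fast and loud, a minimal sketch — the market-data names are hypothetical:

```python
class MarketDataUnavailableError(Exception):
    """Raised when the upstream market data source cannot be read."""


def fetch_day_ahead_prices(source: dict[str, list[float]], market: str) -> list[float]:
    # Fail loudly with a descriptive error, rather than returning None
    # and letting the caller crash somewhere far from the cause.
    try:
        return source[market]
    except KeyError as exc:
        raise MarketDataUnavailableError(f"No price data for market {market!r}") from exc
```

The custom error name makes logs readable, and `raise ... from exc` preserves the original cause.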
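
A minimal sketch of reading secrets from the environment rather than from source code — in practice you’d likely use an existing loader such as python-dotenv; this hand-rolled version is illustrative only:

```python
import os
from pathlib import Path


def load_dotenv(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines become environment variables."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())


# The password lives in .env (which is listed in .gitignore), never in code.
db_password = os.environ.get("DB_PASSWORD")
```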

Testing

Tests should be reliable, comprehensive, and fast to run.

  • Write automated tests alongside your code. They’re critical to prevent your code from regressing, particularly in this early age of AI agents.
  • Your tests should cover between 90% and 100% of your application code. I encourage writing tests before the application code: it helps maintain high coverage and pushes you towards simple function inputs/outputs and readable names.
  • Don’t test everything. Ignore tests for non-critical log messages and UI appearance if it increases speed and maintainability.
  • Keep tests quick. Follow the test pyramid. Use Fake implementations, rather than mocking the real implementations. Mocking is generally an anti-pattern, suggesting that your application should be re-architected to separate mocked behaviour from the function being tested.
  • Keep tests simple. If one test function is testing multiple different concerns or if they require significant set-up code before the tested function can be run, it’s a symptom that your code can be re-architected to separate those concerns. Each test should use the smallest amount of data necessary.
  • Prefer testing UI logic and state, rather than directly testing the UI appearance through browser/render-based testing (Selenium/Playwright). Logic and state tests are quick, whereas browser-based UI tests are often slow and easy to make overly specific. I don’t think they add much value, and they contribute to many of the negative opinions about TDD.
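
To sketch the Fake-over-mock point, here’s a hypothetical in-memory repository standing in for a real database-backed one (all names are illustrative):

```python
class FakeUserRepository:
    """In-memory stand-in for a real database-backed repository."""

    def __init__(self) -> None:
        self._users: dict[str, str] = {}

    def add(self, user_id: str, email: str) -> None:
        self._users[user_id] = email

    def get_email(self, user_id: str) -> str:
        return self._users[user_id]


def send_welcome(repo, user_id: str) -> str:
    # The business logic depends only on the repository's interface,
    # so a test can pass a Fake instead of mocking a real database.
    return f"Welcome, {repo.get_email(user_id)}!"


def test_send_welcome() -> None:
    repo = FakeUserRepository()
    repo.add("u1", "ada@example.com")
    assert send_welcome(repo, "u1") == "Welcome, ada@example.com!"
```

The Fake runs in microseconds and keeps the test free of patching machinery.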

Architecture

  • Follow Clean Architecture. There should be separate parts of your codebase responsible for inputs/outputs, business logic, storage, and user interface. Organise your application by these processes - this could mean different functions, files or folders depending on how big your application is.
  • Define a service layer function for each use case. These functions should be thin, with most of the application code living in other modules/functions responsible for inputs, business logic, DB and UI interactions.
  • Prevent errors as close as possible to their source. When an error comes up, your instinct is to patch the code at the line where it was raised. Instead, consider how it could have been prevented as close as possible to its origin. Can we redesign the code so that the error can’t occur? Can we rename variables to reduce misunderstanding? Similarly, static type checking > dynamic type parsing/validation > unit tests > integration tests > manual tests > production error 🔥
  • Avoid vendor lock-in when you can. Be wary if you’re betting your long-term infrastructure decisions on closed-source technologies.
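
The first two points above can be sketched in a few lines — the order-handling names are invented for illustration, and each function stands in for what might be a whole module in a real codebase:

```python
# Hypothetical layers: inputs (parsing), business logic (domain), storage.

def parse_order(raw: dict) -> tuple[str, int]:
    """Input layer: turn raw request data into validated values."""
    return str(raw["sku"]), int(raw["quantity"])


def total_price_pence(quantity: int, unit_price_pence: int) -> int:
    """Business logic layer: pure, easily tested."""
    return quantity * unit_price_pence


class InMemoryOrderRepository:
    """Storage layer (an in-memory stand-in for a database)."""

    def __init__(self) -> None:
        self.saved: list[tuple[str, int]] = []

    def save(self, sku: str, total_pence: int) -> None:
        self.saved.append((sku, total_pence))


def place_order(raw: dict, repo: InMemoryOrderRepository, unit_price_pence: int) -> int:
    """Service layer: one thin orchestrating function per use case."""
    sku, quantity = parse_order(raw)
    total = total_price_pence(quantity, unit_price_pence)
    repo.save(sku, total)
    return total
```

The service function stays thin; swapping the repository for a real database changes nothing in the business logic.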
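
And a sketch of preventing errors at their source by validating data at the input boundary — the meter-reading names are hypothetical, and a library like Pydantic would serve the same role with less code:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Reading:
    """Validated at construction: invalid data cannot travel deeper."""

    meter_id: str
    energy_mwh: float

    def __post_init__(self) -> None:
        if not self.meter_id:
            raise ValueError("meter_id must be non-empty")
        if self.energy_mwh < 0:
            raise ValueError(f"energy_mwh must be >= 0, got {self.energy_mwh}")


def parse_reading(raw: dict) -> Reading:
    # Fail here, at the input boundary, not three layers later.
    return Reading(meter_id=str(raw["meter_id"]), energy_mwh=float(raw["energy_mwh"]))
```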

Continuous Delivery

  • Containerise application deployments, pinning specific package versions to ensure your application behaves the same wherever it is deployed.
  • Optimise your build/test/deploy cycle time. It’s often a Tragedy of the Commons: individuals find it easy to make small changes that balloon the cycle time, making the project painful for everyone to work on. Standards and measurement are critical to keep it in check.
  • Prevent deployments unless tests pass. If your tests are flaky due to system time, test data, or external dependencies, they can probably be rewritten. Only tolerate a failing test when it depends solely on an external service you don’t control.
  • Prevent production deployments until tests pass, code is reviewed, and functional testing is completed in a production-like staging environment.
  • Define your entire application as code. As of 2026, each of the following operations should be simplified into a one-line command and generally executed automatically: linting, formatting, unit/integration testing, deployment, infrastructure procurement, database schema changes, versioning/releasing and rollback.
  • Automate common operations.
  • Apply the principle of least privilege to service accounts and API keys. Expand read-only access across your team for transparency/learning, without compromising security.
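
A minimal sketch of a pinned, containerised deployment — the base image tag, lockfile name and module name are all illustrative:

```dockerfile
# Pin the base image to an exact version, not a floating tag like "latest".
FROM python:3.12.8-slim

WORKDIR /app

# Install dependencies from a lockfile, so every environment
# gets identical versions (filenames here are illustrative).
COPY requirements.lock .
RUN pip install --no-cache-dir -r requirements.lock

COPY . .
CMD ["python", "-m", "myapp"]
```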
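
One way to make each routine operation a one-line command is a Makefile (the tool choices and targets here are illustrative):

```makefile
lint:
	ruff check .

format:
	ruff format .

test:
	pytest

deploy:
	docker build -t myapp . && docker push registry.example.com/myapp
```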
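
As an example of removing system-time flakiness, a function can take the clock as an argument instead of reading it internally (the peak-hour logic is invented for illustration):

```python
from datetime import datetime, timezone


def is_peak_hour(now: datetime) -> bool:
    """Takes the current time as an argument instead of calling
    datetime.now() internally, so tests fully control the clock."""
    return 16 <= now.hour < 19


# Production code passes the real clock; tests pass a fixed instant.
assert is_peak_hour(datetime(2026, 1, 5, 17, 30, tzinfo=timezone.utc))
assert not is_peak_hour(datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc))
```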

Organisation and Culture

  • Organise teams by product, not by expertise (Dev, Ops, Infra). Products move faster when they have all required capabilities working in close proximity.
  • Fight for simple designs and solutions. Most “requirements” are unnecessary. Complexity is easy to permit, particularly if you’re an agreeable person. It takes discipline to say no and prevent the long-term maintainability burden.
  • Review changes against a shared Definition of Done. This simplifies decision-making by new joiners & reviewers (AI & human), reduces work in progress, and gives everyone a stake in maintaining shared assets (logs, test quality, etc.).
  • Facilitate regular knowledge sharing and learning. Favour communication methods that are in high proximity to where the information is used, such as variable naming/typing, comments, tests, and code reviews.
  • Almost never mandate tools and practices. Smart practitioners will naturally gravitate to better tools - if they disagree with your existing practices, it could mean one of the following:
    • Values misalignment -> agree on what matters
    • Significant context difference -> understand the differences, and the benefits of the new tool in that context
    • Misunderstanding -> educate each other
    • The tool is better -> decide between maintaining consistency and adopting the new tool

Applying the practices

To humans

  • Be consistent. Don’t expect colleagues to follow a code review process if you ignore it yourself.
  • Publish your standards. They should be accessible and understandable, allowing others to research and practice them autonomously.
  • Don’t neglect synchronous communication. Code reviews, group design sessions and pair programming/training help newbies in ways that docs and code cannot.
  • Review your standards regularly.

To agents

  • Summarise your standards in an AGENTS.md file.
  • Include sections covering each of the following:
    • Commands
    • Testing
    • Project structure
    • Code style
    • Git workflow
    • Boundaries
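
A minimal sketch of such an AGENTS.md — every command and path below is illustrative:

```markdown
# AGENTS.md

## Commands
- `uv run pytest` — run the test suite
- `uv run ruff check .` — lint

## Testing
Write tests alongside code; keep coverage above 90%.

## Project structure
`src/` application code, `tests/` tests, `infra/` Terraform.

## Code style
Explicit names with units (e.g. `energy_mwh`); type hints everywhere.

## Git workflow
Small branches, merged daily after review.

## Boundaries
Never commit secrets; don't edit `infra/` without approval.
```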

Find some example agent templates and more tips in this article.

Conclusion

This all sounds like a fair bit of work. And it is… so use your common sense.

Your project may not benefit from all of these. But do learn about them and give them a try yourself if they’re new for you. The point is to understand the benefits of these practices, and use them proactively when relevant to your project. You don’t need tests if you’re trying out a new library, and you don’t need daily merges if you’re working solo.

These practices have been distilled from my experience and the following sources:

  • The DORA Research Program, breaking down the factors influencing software team success.
  • The Phoenix Project - a story that introduces the components and benefits of Continuous Delivery.
  • Refactoring - a reference of ways to change your code in small increments, and putting a name to many things you do already.
  • Software Architecture
  • Clean Code contains many great principles, if you can see past the Java-specific chapters and overly dogmatic takes on function length and class use.
  • These conference talks from Raymond Hettinger and Brandon Rhodes are timeless. I just keep coming back to them.
  • Octopus Energy’s public conventions are interesting for others in the energy industry. I like its level of detail but I don’t use the Django web framework so I disagree with some of its contents (thin data models, ActiveRecord pattern).

Bonus: Tools I prefer, as of 2026

  • VSCode, with few extensions.
  • ChatGPT & Codex for AI, calling on Grok when I need something a little more honest

Infrastructure

  • GitLab for Version Control & Continuous Delivery. GitLab’s free tier is generous, and I don’t need GitHub to publish anything open-source.
  • Terraform for infrastructure procurement.
  • Docker for containerisation
  • PostgreSQL for storage, Supabase if the Auth/API are helpful for the use case
  • Azure for cloud infra. It’s not a strong opinion, any of the big providers have everything I need.
  • Grafana stack for application monitoring.
  • Vercel for frontend/CDN deployment.

Python

  • pytest for testing
  • uv for dependency management, versioning and packaging
  • ruff for python linting & formatting
  • ty for static type checking
  • Pydantic & SQLModel for modelling & runtime validation
  • Alembic for DB schema migrations
  • FastAPI/uvicorn for web APIs
  • Pyomo for optimisation modelling
  • PyTorch for ML, though I haven’t used it in a few years.

JavaScript

Thanks for reading! I'd love to hear what you thought - shoot me a DM on LinkedIn.

© 2025 Jeremy Lloyd. All rights reserved.