Practical Guide: Working with AI Tools as a Developer

Boundaries, workflows, and expectations for AI-assisted development.


The Golden Rule

You are the author. You are accountable.

If you used AI to generate code, documentation, or analysis - and it is wrong, insecure, biased, or breaks something - that is on you. Not the tool. You submitted it. You own it.

This is not punitive. It is the same standard we hold for any tool.

Using AI is like delegating work to a junior team member. It can move fast and produce useful output, but it still requires your review, judgment, and sign-off. The final responsibility sits with you.


What AI Is Good For (Appropriate To Use Here)

What AI Is Bad At (Be Cautious Here)

What AI Must Not Be Used For


The Workflow: How to Actually Use AI Well

1. Think First, Then Prompt

Before asking AI anything, spend enough time forming your own mental model of the problem. What are the constraints? What approaches come to mind? Trying to answer a question yourself, even imperfectly, before consulting AI produces better understanding and retention.

2. Decide How You Want to Be Assisted

Matching your mode of assistance to the task matters. Some options to consider:

A note on agentic coding: As noted in the ethical framework, agentic coding may erode understanding of a codebase and actually reduce productivity. If you use agentic mode, keep tasks small and reviewable, check in regularly, and don't walk away expecting a finished feature. If the agent is struggling after two or three iterations, step in and debug manually rather than looping indefinitely.

3. Write a Clear Spec, Not a Vague Request

The more precise your prompt, the more useful the output. Compare "make this function better" with "add retry logic to this function: at most three attempts, exponential backoff, and preserve the existing exception type". The second gives the AI constraints it can actually satisfy - and gives you criteria to review against.

4. Give AI a Way to Verify Its Own Work

Where possible, set the AI up with a feedback loop - a way to run tests, check output, or validate behavior. This significantly improves result quality. Without verification, the AI is guessing; with a feedback loop, it can iterate until things actually work. That loop might mean running a test suite, executing a bash command that confirms expected behavior, or checking a UI in a browser. The form of verification matters less than its existence.
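One way to give the AI that loop is a single command with a meaningful exit code that it can run after every change. The sketch below is a minimal, hypothetical example - the wrapped command here is a trivial placeholder; in practice you would substitute your project's actual test suite, build, or lint command.

```shell
# A minimal feedback-loop sketch. Replace the command passed to verify
# with whatever proves your change works (test suite, build, lint).
verify() {
  if "$@" >/dev/null 2>&1; then
    echo "VERIFIED"
  else
    echo "FAILED"
  fi
}

# Example: any command with a meaningful exit code works.
verify python3 -c "assert 1 + 1 == 2"
```

Point the AI at this command and ask it to re-run it after each change, iterating only until it prints VERIFIED - and step in yourself after two or three failed rounds.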

5. Review Everything

6. Keep Changes Small

Do not ask AI to generate an entire feature in one shot. Break work into small, reviewable increments. Smaller steps are easier to steer, easier to understand, and easier to debug. This mirrors good engineering practice regardless of whether AI is involved.

It is better to catch the AI going in the wrong direction early, before it has compounded the mistake across many files.

7. Label AI-Assisted Work

When committing code or submitting PRs that include substantial AI-generated content, note it in a Git trailer:

Assisted-by: [tool name, e.g. Claude, Copilot]

This is not about shame. It is about transparency and helping reviewers calibrate their attention. It is also becoming standard practice in major open source projects (LLVM, QGIS, Drupal, Fedora).
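In practice the trailer can be added at commit time or appended to an existing message. The commands below are a sketch (the commit message is a made-up example); git commit has supported --trailer since Git 2.32.

```shell
# Add the trailer when committing (Git 2.32+):
#   git commit --trailer "Assisted-by: Claude"

# Or append it to a drafted message with the plumbing tool
# git interpret-trailers, which reads a message on stdin and
# emits it with the trailer added:
printf 'Fix parser edge case\n\nDetails here.\n' |
  git interpret-trailers --trailer "Assisted-by: Claude"
```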

8. Do Not Iterate Blindly

If AI gives you broken code and you keep pasting the error back in hoping it will fix itself - stop. After two or three failed attempts, step back and debug manually. Blindly looping with an AI wastes time and teaches you nothing.


Contributing to Open Source with AI

If your work involves contributing to upstream open source projects, additional care is required.

Respect Maintainer Time

Every PR you submit costs someone time to review. Before submitting AI-assisted contributions:

Do Not Chase Volume

One excellent, well-tested PR is worth more than ten AI-generated patches that each require maintainer effort to evaluate. Quality over quantity. Always.

Prefer Existing Libraries

LLMs have a strong tendency to generate bespoke implementations rather than using established libraries. Before accepting AI-generated code that solves a common problem, ask: is there an existing library the project already uses for this? If yes, use it.

Do Not Use AI for "Good First Issues"

Several projects (including LLVM) explicitly forbid this. These issues exist as learning opportunities for new contributors. Using AI to solve them defeats the purpose and displaces human growth - even if the project has not formalised the rule.


Security Checklist for AI-Generated Code

Before merging any AI-generated code, verify:


Protecting Your Own Skills

AI can make you faster, but it can also make you dependent if you are not deliberate about learning.


Approved Tools and Data Boundaries

Code completion (e.g. Copilot, Cursor): Permitted for non-sensitive code. Review all suggestions.

Chat-based AI (e.g. Claude, ChatGPT): Permitted for general development questions. Never paste sensitive data.

AI code review tools: Only if they keep a human in the loop. Automated review bots may post comments, but cannot change content without human approval.

AI agents (autonomous): Not permitted to update code directly on shared platforms (e.g. within PRs) without explicit human approval of each action. May be used locally (e.g. in a developer's IDE) as long as workflows are manually approved and final code is quality controlled.

In summary: do not allow automated commits from AI agents. Most other usage is permitted, with the boundaries described above.


Policy Review

This guide should be reviewed every three months or whenever a significant change occurs in AI tooling, team composition, or organisational risk posture. AI capabilities are evolving rapidly; our practices must keep pace.