
Our AI Policy for Software: why we don’t believe in Vibe Coding

In recent months, generative AI applied to software development has been discussed extensively. Too often, however, the debate has become almost caricatured: on one side, those who present it as an inevitable revolution capable of writing software almost on its own; on the other, those who dismiss it as a dangerous shortcut, good only for producing mediocre code.

Our position here at Ryadel is more practical and less ideological. We are not opposed to the use of AI in software development; on the contrary, we often find it useful. We use it, test it, evaluate it, and in some cases it genuinely saves us time. Precisely because we use it, though, we have developed a fairly clear conviction: using AI as an assistant is one thing; using it as a substitute for authorship, engineering judgment, and technical responsibility is something else entirely.

For this reason, we have started adding an AI policy to all our GitHub projects. Not as a symbolic gesture, and not to indulge in anti-AI rhetoric, but to put in writing a principle that matters to us: the software we publish is not a product of “vibe coding.” It is software conceived, designed, written, and maintained by human beings, possibly with the help of automated tools, but never by delegating final responsibility to those tools.

We don’t do “Vibe Coding”

The phrase may sound polemical, but we are not interested in waging war against a buzzword. What matters to us is making one principle clear. When we say our projects are not the result of “vibe coding,” we mean that we reject the idea of software development as a process where automatically generated output is accepted simply because it looks plausible, compiles, or gives the impression of working.

Anyone who has been developing software for long enough knows that plausibility is not enough. A block of source code may look correct while actually introducing subtle bugs, regressions, vulnerabilities, questionable architectural decisions, or extremely high maintenance costs. And this is already true of code written by human beings. When a generative system enters the picture, producing text that is credible by design but not necessarily correct, the risk inevitably increases.

This is where, in our view, the difference between serious AI usage and superficial AI usage becomes clear: the issue is not that a model generates code; the issue begins when that code is treated as though it were somehow guaranteed by the tool that produced it. It is not.

AI as an Assistant, Not an Author

This is the core principle of our policy, and also the most honest summary of our approach. Generative AI can help in many ways: it can speed up research, produce a first draft of documentation, suggest refactorings, create test scaffolding, highlight possible weak points, compare alternative approaches, or flag vulnerabilities and architectural issues that deserve attention.

There is nothing scandalous about that. Quite the opposite: it would be shortsighted to reject, on principle, a tool that, when used properly, can reduce repetitive work and accelerate some exploratory phases.

What we do not accept is the next step, namely the idea that assistance can become substitution. We do not believe AI can be considered the author of software, at least not in any meaningful sense of the term. The reason is simple: generative AI does not design in the full sense of the word, does not answer for the consequences of its choices, does not own the trade-offs it introduces, and, above all, cannot step in when something goes wrong, often at the worst possible moment.

Put plainly: it is not capable of taking responsibility for what it produces, whether that means a flawed architectural decision, a data leak, or a destructive migration.

That is why we continue to believe that software authorship must remain human. By that, we do not mean that every single line must be written manually, but that its meaning must be understood, its impact evaluated, and its choices defensible by the people who publish it.

Where AI Seems Truly Useful to Us

One reason we are not interested in apocalyptic arguments is that they would be dishonest. In some contexts, AI genuinely helps. Denying that would make little sense. For us, the real issue is using it where it increases productivity without lowering the level of control.

For example, we find it useful when we need to:

  • do preliminary research on a library, feature, or pattern;
  • obtain a first draft of documentation or technical text to refine;
  • evaluate possible refactorings or implementation alternatives;
  • generate initial scaffolding for unit or integration tests;
  • carry out a reasoned review of code that has already been written;
  • look for potential vulnerabilities, code smells, or overlooked edge cases;
  • challenge a solution we are considering, to see whether we are missing any risks.

In all these cases, the value does not lie in pressing a button and accepting the result at face value, but in creating a useful technical dialogue. Sometimes AI confirms a direction, sometimes it suggests a sensible idea; quite often, however, it says trivial things, mixes valid intuitions with inaccurate details, or heads in the wrong direction entirely. That is exactly why, in our view, it remains a support tool, not a source of truth.

Where AI Becomes Dangerous

The critical issue is not the generated output itself, but the way it is received. AI becomes dangerous when it produces code or technical text that nobody really reads, nobody fully verifies, and nobody integrates into the existing architecture with real awareness.

This is the aspect that makes us most skeptical of some of the enthusiasm we see around us. People often talk about speed, but much less about the technical debt that accumulates when blocks of sufficiently plausible code are accepted without real human ownership. And the bill rarely arrives immediately: unnecessary complexity, stylistic inconsistencies, regressions that are hard to diagnose, tests that appear to exist but verify nothing truly useful. These are all forms of technical debt that accumulate silently over weeks or months, only to surface when fixing them becomes far more expensive.

Responsibility Cannot Be Delegated

This is probably the point we care about most. In any serious project, someone must be responsible for the architecture, the behavior of the code, security, privacy, operational impact, migrations, tests, infrastructure, and documentation. For us, that someone cannot be a generative model.

It may sound obvious, but today it needs to be stated clearly, because we see a growing tendency to confuse the ability to produce output with the ability to take responsibility for it. They are two completely different things. If we publish a change or approve a pull request, the responsibility is ours. In the same way, if we accept a refactoring suggested by an AI tool and it introduces a serious bug, the responsibility does not belong to the tool: it still belongs to whoever chose to integrate it without truly understanding it or evaluating its consequences.

From this perspective, AI changes nothing about the implicit pact that has underpinned software development ever since shared source code became a thing: the person who commits a change is also the one who answers for it.

What We Ask of Contributors

Our policy does not exclude people who use AI tools in their workflow. Today, that would be unrealistic, and probably not even useful. Many developers use them to a greater or lesser extent, and we do not see that as a problem in itself.

What we do require, however, is that contributions reflect real human review and real human ownership. We do not want pull requests generated entirely by AI and then poured into the repository without substantial verification. Not because “you can always tell,” but because even when you cannot tell immediately, you almost always feel it later.

In practical terms, every contribution should be:

  • read and understood by the person submitting it;
  • checked for correctness, security, and consistency with the project;
  • validated against the existing architecture, conventions, and documentation;
  • modified where necessary, rather than passively accepted from a generative tool.
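As an illustration only, a policy along these lines could be expressed as a short section in a repository’s CONTRIBUTING.md. The checklist items mirror the list above; the heading, the exact wording, and the closing enforcement note are hypothetical, not a quote from our actual policy file:

```markdown
## AI Policy

AI-assisted contributions are welcome, with one non-negotiable condition:
the submitter remains the author and is fully responsible for the change.

Before opening a pull request, confirm that you have:

- [ ] read and understood every line you are submitting;
- [ ] checked the change for correctness, security, and consistency with the project;
- [ ] validated it against the existing architecture, conventions, and documentation;
- [ ] modified the generated output where necessary, rather than accepting it as-is.

Pull requests that appear to be unreviewed AI output may be closed without merge.
```

A section like this keeps the requirement visible exactly where contributors look before submitting, without forbidding AI tools outright.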

To us, this is not an extreme requirement. It is the bare minimum, the minimum level of seriousness we expect from anyone contributing to a software project that is meant to remain maintainable.

How We Intend to Use AI in Our Projects

The answer, ultimately, is simple: we intend to use it a lot, but we intend to use it with discipline. We will use it to speed up secondary tasks, help with analysis, obtain second opinions, improve texts, explore alternatives, review code, and automate some of the more mechanical work. But we will not use it as an excuse to lower the standard of understanding, review, or responsibility.

We have no interest in publishing code that was “produced quickly” if that means losing control over what we are building. We would rather move slightly more slowly and know exactly why a given solution ended up in the project. We would rather use AI to improve the quality of our work than to anesthetize our judgment.

If we had to summarize our position in one sentence, it would be this: we are not opposed to AI-assisted development; we are opposed to treating unverified output as if it were software engineering.

Conclusions

For us, adding an AI policy to a project does not mean making an ideological statement. It means clarifying a working method. It means saying that AI can be part of the process, but not a substitute for human responsibility. It means recognizing the usefulness of the tool without giving up the principles that make software something more than a simple sequence of plausible tokens.

Generative tools will likely continue to improve, and it would be naive to think otherwise. But even assuming they become far more capable than they are today, one question will still remain central for us: who truly understood this code, who decided to include it, and who is accountable for it?

As long as the answer to that question remains “a human being,” AI can play a useful role. If the answer starts becoming vague, automatic, or stripped of accountability, then we are no longer talking about a tool that helps software development, but about a gradual surrender of an essential part of the craft.
