Why Socrates is the Key to Design in AI

Everyone can be a programmer because we already draft specs. Specs are documents that capture intent and values, and we use them to explain to LLMs what output we want. But there is often a miscommunication, and that leads to bad, frustrating model responses. So what separates us from the result we expect from AI is our ability to express that intent.

Given that the biggest gap is communication, the future of great AI systems probably lies in excelling at closing that gap. Product design should adapt to the AI experience.

AI Works Best With Humans

We should design systems that get at the user's intent. The most popular method is direct human clarification: putting humans in the agentic loop.

This process pulls out ambiguity and transforms it into clarity, making the human-LLM alignment on intent as strong as possible.

That gives the user a kind of "consultant" role: instead of just giving instructions, you answer the model's questions, especially Socratic ones, as we'll see later.

The easiest way to do this is to ask the user directly, through some sort of function call. But we should reflect on whether that is the most efficient way. Asking can hurt UX, since it demands time and effort, and it's not always worth the cost.

We should build a system that intelligently determines when the model needs user clarification, balancing it against the cases where the information is already sufficient, since constant questions annoy the user.
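A minimal sketch of this gating idea in Python. Everything here is hypothetical: `ambiguity_score` is a toy heuristic (a real system might ask the LLM to rate its own uncertainty), and `ask_user` stands in for whatever clarification tool call your agent exposes.

```python
# Hypothetical sketch: only interrupt the user when the request looks ambiguous.
# AMBIGUITY_THRESHOLD, ambiguity_score, and ask_user are illustrative names,
# not part of any real API.

AMBIGUITY_THRESHOLD = 0.6

def ambiguity_score(request: str) -> float:
    """Toy heuristic: vague words raise the score toward 1.0."""
    vague_markers = ["something", "stuff", "somehow", "maybe", "etc"]
    hits = sum(marker in request.lower() for marker in vague_markers)
    return min(1.0, hits / 2)

def ask_user(question: str) -> str:
    # In a real agent this would surface a UI prompt or a tool call;
    # here we stub it with console input.
    return input(question)

def handle(request: str) -> str:
    # Gate the clarification: only ask when ambiguity crosses the threshold.
    if ambiguity_score(request) >= AMBIGUITY_THRESHOLD:
        detail = ask_user("Could you clarify what outcome you expect? ")
        request = f"{request} (clarified: {detail})"
    return request
```

The key design choice is that the gate runs before the question reaches the user, so precise requests flow straight through without friction.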

That said, there are other mechanisms to reduce friction. In Clous, we have:

These should be combined with context engineering, so the LLM has all the relevant information laid out clearly. The objective is a reliable, resilient system that gives accurate responses regardless of the quality of the human input.
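Context engineering can be as simple as deterministically assembling everything the model needs before it answers, so even weak user input lands on a well-grounded prompt. A sketch, with all names (`build_context`, the section headings, the five-turn window) chosen for illustration:

```python
# Hypothetical sketch of context assembly: gather profile data and recent
# conversation into one structured prompt alongside the raw user message.

def build_context(user_message: str, profile: dict, recent_turns: list[str]) -> str:
    sections = [
        "## User profile",
        "\n".join(f"- {k}: {v}" for k, v in profile.items()),
        "## Recent conversation",
        "\n".join(recent_turns[-5:]),  # keep only the last few turns
        "## Current request",
        user_message,
    ]
    return "\n\n".join(sections)
```

Because the assembly is deterministic, a vague message still arrives wrapped in the user's profile and recent history, which the model can lean on instead of guessing.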

Maieutics: From Silicon Valley to Socrates

Silicon Valley is great at reinterpreting old concepts for the business environment, which is still innovative thinking, to be fair. For the last few years, the biggest mantra has been working from First Principles.

First principles thinking is often applied when building solutions that solve the user's fundamental problem, but it also applies when interacting with AI. We usually have an idea of what we want to reach, but not how to get there. We have to work backwards to figure out the ultimate destination, and then peel back the layers of how we'll get there.

The way to uncover those layers was explained by Socrates through maieutics, the Socratic method of teaching through questions that help a person "give birth" to their own ideas.

Why is it helpful?

A practical example:
Student: "AI is going to take all our jobs."
Traditional method: "No, that's not true because..."
Socratic method:

"What do you mean by 'all'?" (clarification)
"Throughout history, has technology eliminated jobs or created them?" (evidence)
"Which jobs specifically do you think will disappear?" (probing)
"What new jobs might appear?" (perspectives)
"What would happen if you were right?" (consequences)
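The five question types above can be baked directly into a system prompt so the model probes intent before answering. A sketch using the standard chat-message format most LLM APIs accept; the prompt wording and `make_messages` helper are illustrative assumptions, not from any particular product:

```python
# Hypothetical sketch: a Socratic system prompt encoding the five question
# types, plus a helper that builds standard chat-format messages.

SOCRATIC_SYSTEM_PROMPT = """You are a Socratic assistant.
Before giving a final answer, ask at most one short question per turn,
choosing from these types:
1. Clarification: "What exactly do you mean by X?"
2. Evidence: "What supports that claim?"
3. Probing: "Which cases specifically?"
4. Perspectives: "What alternative outcomes are possible?"
5. Consequences: "What follows if you are right?"
Stop asking once the user's intent is unambiguous."""

def make_messages(user_message: str) -> list[dict]:
    # role/content dicts are the common denominator across chat LLM APIs.
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```

Capping the model at one question per turn keeps the dialogue feeling like a conversation rather than an interrogation.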

User Manipulation

We must be careful here. At the end of the day, this is a method of manipulation. However, Socrates' manipulation had an ethical goal ("Is it fair to punish someone innocent if everyone believes they are guilty?"). In our case (HR), it is a way of nudging the user toward understanding. It can help managers who make unrealistic or disproportionate requests realize they don't actually need what they're asking for.

Manager: "I need to hire someone with 10 years of experience in Python,
React, and AI/ML, who speaks 4 languages and accepts a junior salary."

HR AI: "I understand. How many candidates with that profile have you
found in the last 6 months?"

Manager: "None, that's why I need it urgently."

HR AI: "If you had to choose just ONE of those skills as essential,
which would it be?"

Manager: "Python, definitely."

HR AI: "And could the others be learned on the job?"

Manager: "Yes, I suppose..."

HR AI: "What would happen if we hired someone strong in Python with the
potential to learn the rest, instead of waiting indefinitely for the
impossible candidate?"

Manager: "Hmm... you're right. Let's rework the job posting."