Andrew Conru's 1997 Ph.D. thesis on using agent-based collaboration for cable harness routing has some fascinating parallels to how we might approach building more capable machine learning systems today, particularly with large language models (LLMs).

One of the key ideas was decomposing the complex routing problem into more manageable subproblems and having specialized agents work on different aspects. The system had creator agents to generate initial designs, mutator agents to tweak and improve existing designs, and combiner agents to merge the best parts of different solutions. This breakdown let the system tackle the enormous search space far more efficiently than a single monolithic search could.
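To make that division of labor concrete, here's a minimal sketch of how such agents might operate on a shared design representation. Everything here - the `Design` class, the agent names, the random operators - is invented for illustration, not code from the thesis.

```python
import random
from dataclasses import dataclass

@dataclass
class Design:
    """A candidate harness design; 'fragments' stands in for route segments."""
    fragments: list
    score: float = 0.0

class CreatorAgent:
    def propose(self) -> Design:
        # Generate a fresh design from scratch (random here; heuristic in practice).
        return Design(fragments=[random.random() for _ in range(5)])

class MutatorAgent:
    def improve(self, d: Design) -> Design:
        # Perturb one fragment of an existing design.
        frags = list(d.fragments)
        i = random.randrange(len(frags))
        frags[i] += random.uniform(-0.1, 0.1)
        return Design(fragments=frags)

class CombinerAgent:
    def merge(self, a: Design, b: Design) -> Design:
        # Splice two parent designs at a random crossover point.
        cut = random.randrange(1, len(a.fragments))
        return Design(fragments=a.fragments[:cut] + b.fragments[cut:])
```

The point of the structure is that all three agent types read and write the same pool of candidate designs, so improvements found by one are immediately available to the others.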

I see an analogy here to using multiple specialized machine learning models, including LLMs, each trained for a particular task and able to exchange information and build on each other's outputs. Just as our routing agents shared design fragments to be reused and recombined, these ML agents could share relevant context, partial solutions, and generated content to solve problems collaboratively.

The genetic algorithms, in which design fragments from the best solutions were preferentially selected and recombined, were a way of reusing and building on promising partial solutions. There's a parallel here to how LLMs learn and reuse patterns and knowledge fragments from their training data. Techniques like fine-tuning and prompt engineering are ways of guiding these models to combine their learned fragments in specific ways for particular tasks.
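Here's a minimal sketch of that preferential-selection-and-recombination step, using generic roulette-wheel selection and one-point crossover over lists of fragments. These are stand-in operators, not the thesis's exact ones.

```python
import random

def evolve(population, fitness, n_offspring):
    """One generation of a generic GA: preferentially select fitter designs
    and recombine their fragments. Designs are plain lists of fragments."""
    scored = [(fitness(d), d) for d in population]
    total = sum(s for s, _ in scored)

    def pick():
        # Roulette-wheel selection: probability proportional to fitness.
        r = random.uniform(0, total)
        acc = 0.0
        for s, d in scored:
            acc += s
            if acc >= r:
                return d
        return scored[-1][1]

    offspring = []
    for _ in range(n_offspring):
        a, b = pick(), pick()
        cut = random.randrange(1, len(a))
        offspring.append(a[:cut] + b[cut:])  # one-point crossover
    return offspring
```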

At a more abstract level, you can view the cable harness routing problem as a sort of knowledge tree, where the nodes represent concepts or partial designs and the connections represent relationships or transformations between them. The goal is to find good paths through this tree that satisfy the constraints and optimize the objectives. Our creator, mutator, and combiner agents essentially traversed and manipulated this tree in their own ways.
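One way to make the "good paths through the tree" framing concrete is a best-first search over nodes representing partial designs. This is a generic textbook pattern rather than the agents' actual traversal logic; `expand`, `cost`, and `is_goal` are placeholders for whatever refinement, scoring, and completion tests a real system would use.

```python
import heapq

def best_first(root, expand, cost, is_goal):
    """Best-first search over a tree of partial designs.
    'expand' yields child nodes (refinements of a partial design);
    'cost' scores a node against constraints and objectives (lower is better)."""
    frontier = [(cost(root), 0, root)]  # (priority, tiebreak, node)
    tiebreak = 1
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for child in expand(node):
            heapq.heappush(frontier, (cost(child), tiebreak, child))
            tiebreak += 1  # keeps the heap from ever comparing nodes directly
    return None
```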

Similarly, you can view the knowledge in a language model as forming an enormous, implicit knowledge graph or tree. The model encodes a vast web of concepts and relationships learned from the training data. When you query the model, it activates and traverses a portion of this knowledge tree to generate a response. Different prompts or inputs lead it down different paths and produce different combinations of its learned fragments.

So in a sense, an LLM is already a kind of combiner agent, one that can flexibly recombine its learned knowledge in response to different contexts and goals. If you had multiple LLMs or other ML models specialized for different tasks, they could work together by activating different parts of their knowledge graphs and sharing the resulting context and outputs, similar to how our routing agents shared design fragments.
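As a toy illustration of that collaboration pattern, here's a sketch in which specialist models contribute partial solutions to a shared context and a combiner merges them. The callables are deliberately abstract - they stand in for whatever model-serving interface you'd actually use, and nothing here is a real LLM API.

```python
from typing import Callable

def collaborate(task: str,
                specialists: list[Callable[[str], str]],
                combiner: Callable[[list[str]], str]) -> str:
    """Toy pipeline: each specialist model builds on the accumulated context,
    then a combiner merges the partial solutions into one answer."""
    context = task
    partials = []
    for model in specialists:
        output = model(context)             # each model sees all prior outputs
        partials.append(output)
        context = context + "\n" + output   # outputs become shared context
    return combiner(partials)
```

The design choice mirrors the routing agents: rather than each model working in isolation, every output is appended to a shared context that the next model can build on.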


If you've ever looked under the hood of a car or inside a sophisticated piece of machinery, you've probably seen these intricate webs of wires and cables connecting all the electrical components. Figuring out the optimal paths to route these harnesses is a deceptively tricky combinatorial optimization problem with a huge number of possibilities to consider.

At the time, there weren't really any off-the-shelf CAD tools suitable for this task - 3D modeling and simulation software was still pretty primitive. So my first challenge was to build a 3D CAD system from scratch, one capable of representing the cable harnesses and their environments and allowing interactive manipulation. That was a ton of gruntwork - I remember spending countless late nights hacking away at low-level graphics and geometry code. But hey, it was all part of the fun and the learning process.

Once I had the basic CAD functionality in place, I could start experimenting with different optimization strategies. I knew trying to search the whole solution space exhaustively was hopeless - there were just way too many possibilities, and evaluating each candidate design took significant computation. So I had to get creative with some heuristics and approximations.
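To give a feel for the scale (my back-of-the-envelope numbers here, not figures from the thesis): even if you only count tree-shaped topologies over n connection points, Cayley's formula gives n^(n-2) distinct labeled trees, before you've routed a single cable.

```python
# Rough illustration of the combinatorial blowup: the number of labeled
# trees on n nodes (Cayley's formula), a lower bound on topology candidates.
for n in (5, 10, 20, 30):
    print(n, n ** (n - 2))
# 20 connection points already gives 20^18 ≈ 2.6e23 tree topologies.
```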

The key insight was to break the problem down into two levels and have different specialized agents attacking each part. At the high level, we had the topology problem of finding good overall structures or skeletons for the cable harnesses. Then at the lower level, we had the routing problem of finding the exact paths for each cable bundle within a given topology.

I used genetic algorithms to evolve candidate topologies, with a bunch of mutation and crossover operators to generate new designs based on the most promising ones found so far. Then, for each topology, a separate set of agents went to work on the routing problem, using heuristics like following shortest paths and staying away from obstacles. The trick was allowing these agents to share information and collaborate - the routing agents could provide feedback to guide the topology search, and good route fragments found by one agent could be reused by others.
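Here's a rough sketch of how that two-level collaboration might be wired together: an outer genetic loop over topologies, an inner routing pass, and a shared cache of good route fragments that feeds results back into later candidates. All the names and the scoring convention (lower is better) are my invention for illustration, not the thesis's actual code.

```python
import random

def optimize(n_generations, topologies, mutate, crossover, route, evaluate):
    """Two-level loop: an outer GA evolves topologies while inner routing
    reuses good route fragments found while evaluating earlier candidates.
    'route' returns a dict mapping an endpoint pair to a path for that pair."""
    fragment_cache = {}  # shared pool: endpoint pair -> best route found so far
    for _ in range(n_generations):
        scored = []
        for topo in topologies:
            routes = route(topo, fragment_cache)   # routing reuses cached fragments
            scored.append((evaluate(topo, routes), topo))
            for pair, path in routes.items():      # feed good fragments back
                best = fragment_cache.get(pair)
                if best is None or len(path) < len(best):
                    fragment_cache[pair] = path
        scored.sort(key=lambda sd: sd[0])          # lower score = better design
        elite = [t for _, t in scored[: max(2, len(scored) // 4)]]
        topologies = elite + [
            mutate(random.choice(elite)) if random.random() < 0.5
            else crossover(*random.sample(elite, 2))
            for _ in range(len(topologies) - len(elite))
        ]
    return scored[0]  # best (score, topology) from the last scored generation
```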

It was a bit of a brute force approach in some ways, with lots of trial and error and not necessarily a ton of deep mathematical theory behind it. But it got the job done and I was able to show some pretty solid results in terms of finding efficient, high-quality cable harness designs much faster than a human could.

Looking back, I'm sure there are plenty of things I would do differently if I were tackling the same problem today, given all the advances in algorithms, computation, and machine learning over the past 25 years. But I'm still proud of the work, and I think a lot of the core ideas about problem decomposition, specialized agents, and collaborative optimization are just as relevant today in the world of large language models and AI systems.