Leg Landmark Labeler

Place 3 points on the highlighted leg, then Save.
First load can take a few seconds while predictions run.
📖 First time here? Quick read — what is this and why?

What this is, in 60 seconds

We build custom CTi knee braces. Every brace fitting starts with a photo of the patient's leg. To turn that photo into a brace template, someone has to find three anatomical landmarks on the leg.

That’s a small but skilled task — it’s the kind of thing an orthotist already does by eye when fitting a brace in person. We’re trying to teach a computer to make a first guess at those landmarks, so the operator doing the actual fitting later doesn’t have to start from zero on every photo. They still review and adjust the brace fit — but they don’t do the busywork of placing landmarks from scratch.

How does a computer “learn” this? (playground edition 🎈)

Imagine you’re trying to teach a kid where a knee is on a person.

You wouldn’t hand them a textbook. You’d point at a knee on yourself, then point at a knee in a picture, then circle a knee on a coloring sheet. Every example reinforces “knees look like this, they sit here on the leg, they have this little bump.” Show the kid 50 different legs with the knee circled, and they get pretty good at finding knees on the 51st leg they’ve never seen.

That’s what a “machine learning model” is. Not magic. Not a brain. Just a pattern-matcher that gets better the more examples it sees, with the answer marked.

Your clicks are the answer key. Every time you place a landmark in the right spot, you’re showing the model “see, on a leg that looks like this, the knee is here.” After enough examples from people who actually know what they’re looking at, the model can guess pretty well on legs it’s never seen before.
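If you're curious what "showing the model the answer" looks like in code, here's a toy sketch in a PyTorch style. Nothing in it is our real training code: the LandmarkNet model, the tensor shapes, and the learning rate are all illustrative assumptions. The one real idea is the loss line, which measures how far the model's guess is from your clicks so the optimizer can nudge the guess closer.

```python
# Toy sketch only: a pattern-matcher that learns from "answer key" clicks.
import torch
import torch.nn as nn

class LandmarkNet(nn.Module):
    """Hypothetical model: maps a leg photo to 3 (x, y) landmark guesses."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)  # 3 landmarks x (x, y) = 6 numbers

    def forward(self, image):
        return self.head(self.backbone(image)).view(-1, 3, 2)

model = LandmarkNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One "lesson": a photo plus the answer key (your clicks).
image = torch.rand(1, 3, 256, 256)   # stand-in for a real leg photo
your_clicks = torch.rand(1, 3, 2)    # where you actually placed the 3 dots

guess = model(image)                        # the model's current guess
loss = ((guess - your_clicks) ** 2).mean()  # how far off it was
optimizer.zero_grad()
loss.backward()                             # work out which way to nudge
optimizer.step()                            # nudge the weights a little
```

Repeat that loop over thousands of labeled photos and the guesses drift toward where experts actually click.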

So what are we actually doing with your clicks?

Right now the model has only seen a few hundred examples. We want to get to thousands. As more good labels come in, the model’s first-guess landmarks get closer to where they should actually be. Eventually the operator just nods and moves on, instead of dragging every dot.

Every 25 saved labels, the model retrains in the background; once that finishes, the next image you label uses the smarter version. You'll see a yellow banner while this happens.

How does the website “know” the model is ready?

Every 25 saved labels, the server quietly starts the training program in the background. While it’s running, you’ll see a yellow banner + a console that streams the actual training output (epoch counts, loss values, etc.) so you can watch it learn. People keep labeling normally during this; the current model keeps making predictions until the new one finishes.
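For the curious, here's roughly what that background kick-off could look like on the server. This is a hypothetical sketch, not our actual code: the train.py script name, the counter, and how the log reaches the yellow-banner console are all stand-ins.

```python
# Hypothetical sketch: count saves, launch training, stream its output.
import subprocess
import threading

RETRAIN_EVERY = 25
saved_labels = 0
training_log = []   # lines streamed to the yellow-banner console in the UI

def on_label_saved():
    """Called each time someone hits Save."""
    global saved_labels
    saved_labels += 1
    if saved_labels % RETRAIN_EVERY == 0:
        # Run training off the main thread so labeling never blocks.
        threading.Thread(target=run_training, daemon=True).start()

def run_training():
    # The current model keeps serving predictions while this runs.
    proc = subprocess.Popen(
        ["python", "train.py"],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
    )
    for line in proc.stdout:     # epoch counts, loss values, etc.
        training_log.append(line)
    proc.wait()
```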

When training finishes, the server reloads the new model file from disk and bumps the “model vN” number you see in the header. The next image you get is predicted by the new model. There’s no need to refresh the page — the swap is automatic.
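The swap itself might look something like this sketch. Again, the file name and version counter are illustrative, and we're assuming the trainer saved a whole PyTorch model object to disk.

```python
# Hypothetical sketch of the hot swap; names and paths are illustrative.
import torch

model_version = 1      # the "model vN" number shown in the header
current_model = None   # whatever is serving predictions right now

def on_training_finished():
    """Runs once the training process exits successfully."""
    global current_model, model_version
    # Assumes train.py saved the full model object to this path.
    new_model = torch.load("model_latest.pt", map_location="cpu",
                           weights_only=False)
    new_model.eval()
    current_model = new_model   # the very next prediction uses the new weights
    model_version += 1          # header badge bumps; no page refresh needed
```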

What this is NOT

This doesn’t replace anyone. The brace fitting itself — adjusting the template to the patient, judging fit, talking with the patient — that’s still 100% the orthotist’s call. The computer’s job is to skip the tedious “wrestle the photo into position” step that comes before the real work.

You’re teaching it the part that needs an expert eye. The pixel-tracing of the leg outline (the green tint) is handled by a different model that’s already good — that part really is monkey work. Your placements are the high-value clicks.