FYI, I use my Codex models with Claude Code and they work pretty great. It can even pick up existing conversations w/ Opus and then resume w/ OAI models.
They're weighted differently. Currently it's set to: reposts 2.0x, replies 1.5x, bookmarks 1.2x, likes 1.0x, and clicks 0.5x. These are guesstimates based on public info.
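For anyone curious, the scoring boils down to a weighted sum. Here's a minimal sketch with those same guesstimate weights (the `engagement_score` helper and its exact shape are my own illustration, not the project's actual code):

```python
# Guesstimate weights from public info, not X's actual trained values.
WEIGHTS = {
    "repost": 2.0,
    "reply": 1.5,
    "bookmark": 1.2,
    "like": 1.0,
    "click": 0.5,
}

def engagement_score(counts: dict[str, int]) -> float:
    """Sum each engagement count times its weight; unknown kinds score 0."""
    return sum(WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

# 3 reposts + 10 likes + 4 clicks -> 3*2.0 + 10*1.0 + 4*0.5 = 18.0
print(engagement_score({"repost": 3, "like": 10, "click": 4}))
```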
It's real embeddings (MiniLM, truncated to 128-dim), so the cosine similarities between tweets are genuine, but it's obviously not X's actual trained model. The relative distances are meaningful, though.
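The truncation step is just "keep the first 128 components, then re-normalize" so cosine still behaves. A quick sketch (random stand-in vectors here instead of real MiniLM outputs, which are typically 384-dim):

```python
import numpy as np

def truncate_and_normalize(v: np.ndarray, dim: int = 128) -> np.ndarray:
    """Keep the first `dim` components, then rescale to unit length."""
    t = v[:dim]
    return t / np.linalg.norm(t)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity; for unit vectors this is just the dot product."""
    return float(np.dot(a, b))

# Stand-ins for two MiniLM sentence embeddings (really 384-dim).
rng = np.random.default_rng(0)
a = truncate_and_normalize(rng.normal(size=384))
b = truncate_and_normalize(rng.normal(size=384))
print(a.shape, cosine(a, b))
```

Because all the truncated vectors go through the same projection, the relative distances stay comparable even though they aren't X's own embedding space.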
It should technically work with Mistral (everything is routed via LiteLLM under the hood), but it's untested haha. I don't think Mistral's lost the race; it just doesn't seem that popular relative to Gemini/Anthropic/OpenAI, so I didn't bother testing it.
Unless the input is absolutely clear, the 'Fast'/embedding model snaps to the closest alternative.
Here's the song it created when I asked Gemini Flash to do it!
It called it "Cobalt Kinetic" and described it as "A high-velocity Mixolydian engine fueled by syncopated electronic pulses and glitchy accents, evoking the iridescent blur of hidden magic weaving through a midnight skyscraper canyon."