Next Word Explorer
At its most fundamental level, GPT is simply "turbocharged autocomplete". It is a very large deep neural network whose input is all of the words in a conversation (including its own), and whose output is a ranked list of candidate next words, each with a probability of being the one that follows. With the Next Word Explorer presented here, you can investigate those lists of likeliest candidates and see how the model's choices unfold.
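In spirit, the explorer's view of the model can be sketched in a few lines. The probability distribution below is invented for illustration; a real GPT model produces one like it over its entire vocabulary at every step.

```python
# Toy sketch of "turbocharged autocomplete": the model assigns a probability
# to every candidate next word, and generation picks from that ranked list.

def top_candidates(distribution, k=3):
    """Return the k likeliest next words with their probabilities."""
    return sorted(distribution.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical next-word probabilities after the prompt "The cat sat on the".
next_word_probs = {"mat": 0.62, "floor": 0.18, "sofa": 0.11, "moon": 0.01}

for word, p in top_candidates(next_word_probs):
    print(f"{word}: {p:.0%}")
```

Picking the top entry every time (greedy decoding) gives one "path"; sampling from the list instead is what makes each conversation branch differently.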
For more visualization and discussion, please read "ChatGPT as a garden of forking paths" by Alejandro Panza.
Embedding Triangulator
Enter a pair of "reference texts": two short pieces of text, each a phrase or a sentence. Then enter a "variable text" and press the button to compare it to both reference texts.
In natural language processing, a text embedding is a long sequence of numbers that represents a piece of text encoded as a "thoughtform". The individual numerical values that make up an embedding are meaningless in and of themselves, but embeddings that represent similar concepts or ideas will be similar (i.e. proximate) to each other. By the same token, embeddings that represent completely unrelated concepts will be very dissimilar (i.e. far apart from one another).
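The standard way to measure that proximity is cosine similarity between the embedding vectors. The three-dimensional vectors below are made up for illustration (real embeddings have hundreds or thousands of dimensions, obtained from an embeddings API):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means 'pointing the same way'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Tiny invented "embeddings": two related concepts and one unrelated one.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
spreadsheet = [0.0, 0.1, 0.9]

print(cosine_similarity(cat, kitten))       # high: related concepts
print(cosine_similarity(cat, spreadsheet))  # low: unrelated concepts
```

The absolute values of any one vector tell you nothing; only the comparison between vectors carries meaning, which is exactly the point made above.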
This visualizer shows how GPT "conceptualizes" ideas expressed in text by comparing the similarities of their embeddings (i.e. the numerical representations of their "thoughtforms"). You can see how similar GPT judges the two "reference texts" to be to one another, and how it perceives different "variable texts" relative to those reference texts.
The percentages on the bars of the variable texts are not meant to add up to 100%. The left and right percentages show how similar the variable text is to the left and right reference texts, respectively. A variable text could be very similar to both (high percentages on both sides), or to neither (low percentages on both sides). The difference between these percentages is how we visualize GPT's "judgment".
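The app's exact scoring formula isn't stated here, but the idea of two independent percentages can be sketched as follows, assuming (as an illustration only) that each bar is simply the cosine similarity to that reference, scaled to 0–100:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def triangulate(variable, left_ref, right_ref):
    """Score the variable embedding against each reference independently,
    so the two percentages need not sum to 100."""
    left = round(100 * cosine_similarity(variable, left_ref))
    right = round(100 * cosine_similarity(variable, right_ref))
    return left, right

# Invented embeddings for illustration.
left_ref = [1.0, 0.0, 0.0]
right_ref = [0.0, 1.0, 0.0]
variable = [0.7, 0.7, 0.1]  # resembles both references at once

print(triangulate(variable, left_ref, right_ref))  # high on both sides
```

Because each side is computed on its own, a variable text that echoes both references scores high twice, and one that echoes neither scores low twice, exactly the behavior described above.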