Yes, it can memorize short phrases, similar to how it "remembers" words. It's trained on a web corpus that includes and emphasizes Wikipedia. The model is big enough to memorize some things, though not in a way that lets them be reliably retrieved, and it will make stuff up when it doesn't remember. So it's not Google, but it's sometimes reminiscent of one.
Here is a Q&A conversation where I found some things it "learned".
Richard Feynman was reported to have said: "What I cannot create, I do not understand."
How does that happen? Does the model actually encode a bunch of complete fragments of text?
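Not as literal stored strings, as far as anyone can tell; the text is encoded diffusely in the weights as statistical associations. A toy sketch (my own illustration, not how a transformer actually works) is a character-level n-gram model: it only stores counts of which character follows each short context, yet greedy decoding can still regenerate an exact training phrase, so the "fragment" exists only implicitly in the statistics.

```python
from collections import defaultdict

# Toy illustration: a character-level 5-gram model "memorizes" a phrase
# from its training corpus without storing it as a literal string.
corpus = 'What I cannot create, I do not understand.'

n = 5
counts = defaultdict(lambda: defaultdict(int))
for i in range(len(corpus) - n):
    context, nxt = corpus[i:i + n], corpus[i + n]
    counts[context][nxt] += 1  # next-character frequencies per context

def complete(prompt, max_len=60):
    # Greedy decoding: always extend with the most frequent continuation.
    out = prompt
    while len(out) < max_len:
        dist = counts.get(out[-n:])
        if not dist:
            break  # unseen context: the model has nothing to say
        out += max(dist, key=dist.get)
    return out

print(complete('What I cannot '))
# → What I cannot create, I do not understand.
```

With a corpus this small every 5-character context is unambiguous, so the phrase comes back verbatim; shrink the context (try `n = 3`) and the model starts looping and confabulating instead, which is loosely analogous to an LLM making things up when its "memory" of a passage is too weak.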