Wow, great post! Some of these tweets are hilariously good (like the Peterson and Dawkins meaning-of-life ones). Creating the images is also a very nice touch that adds to the believability.
I recently fine-tuned the small 117M-parameter GPT-2 to generate new Beatles lyrics. The results aren't quite as good as the tweets, but I was still surprised at how good some of the lyrics sound. It would be extremely cool to try fine-tuning the 1.5B-parameter model and see what it comes up with.