February 9th, 2026
This year is interesting: The only change to the model from last week to this week was the reveal of who won the Directors Guild of America (DGA) Award. It was Paul Thomas Anderson, director of One Battle After Another. The drastic shift in probabilities shows just how important this one award is. The film is also bolstered by the fact that it has been winning among critics’ organizations for months, but you really get a picture here of how crucial winning the DGA is. What will be interesting is whether the big awards in the next few weeks, such as the Producers Guild Awards, point in a different direction.
And, as always, don’t count out the Academy doing something wild. It’s been a few years since a major surprise (CODA, at the 94th Oscars), and 1-in-4 chances (roughly what the rest of the field collectively has against One Battle After Another) happen all the time…
I wrote a post last year looking at how to use tools in LangChain to give GPT-3.5 Turbo access to information on the web, outside of its training data.
The purpose of the present post is to revisit that work, improving on the poor performance I saw there through refactoring and prompt engineering.
Background
The motivating example is again using large language models (LLMs) to help me calculate features for my Oscar model. Specifically: How many films did the director of a Best Picture nominee direct before the nominated film?
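As a rough illustration of the setup, here is a minimal sketch of a LangChain agent that pairs GPT-3.5 Turbo with a web search tool. The specific search tool (DuckDuckGo), the older initialize_agent API, and the question wording are assumptions for this example, not necessarily what the original post used:

```python
# Minimal sketch: a tool-using agent that can search the web for
# filmography facts that fall outside the model's training data.
# Assumes the langchain, langchain-community, langchain-openai, and
# duckduckgo-search packages, plus OPENAI_API_KEY in the environment.
from langchain.agents import AgentType, initialize_agent
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
search = DuckDuckGoSearchRun()  # any web search tool would do here

agent = initialize_agent(
    tools=[search],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # ReAct-style tool use
    verbose=True,
)

# The feature of interest: prior directing credits for a nominee's director.
question = (
    "How many feature films did the director of the Best Picture nominee "
    "One Battle After Another direct before that film? Reply with a number."
)
print(agent.run(question))
```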
I wrote a post in July 2023 describing my process for building a supervised text classification pipeline. In short, the process starts with reading the text, writing a thematic content coding guide, and having humans label the text. I then define a variety of ways to pre-process the text (e.g., word vs. word-and-bigram tokenizing, stemming vs. not, stop words vs. not, filtering on how many times a word had to appear in the corpus) in a workflowset and run these different pre-processors through different standard models: elastic net, XGBoost, random forest, etc. Each class of text gets its own model, so I would run this pipeline five times if there were five topics in the text. Importantly, this is not natural language processing (NLP) in the modern sense, as it is a bag-of-words approach.
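The original pipeline lives in R’s workflowsets, but the idea translates directly. As a hedged cross-language sketch (hypothetical column names and hyperparameters, with GradientBoostingClassifier standing in for XGBoost), here is what crossing pre-processors with models might look like in scikit-learn:

```python
# Sketch of the pre-processor x model grid, analogous to a workflowset.
from itertools import product

from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Pre-processing variants: words vs. words + bigrams, with and without
# English stop words, and a minimum corpus-frequency filter (min_df).
vectorizers = {
    "words": CountVectorizer(ngram_range=(1, 1), min_df=5),
    "words_bigrams": CountVectorizer(ngram_range=(1, 2), min_df=5),
    "words_nostop": CountVectorizer(ngram_range=(1, 1), min_df=5,
                                    stop_words="english"),
}

# Standard models: elastic net logistic regression, boosted trees
# (standing in for XGBoost), and a random forest.
models = {
    "elastic_net": LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, max_iter=5000),
    "boosted_trees": GradientBoostingClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=500),
}

def run_workflowset(texts, labels):
    """Cross every pre-processor with every model; return mean CV accuracy."""
    results = {}
    for (v_name, vec), (m_name, mod) in product(vectorizers.items(),
                                                models.items()):
        pipe = Pipeline([("vectorize", vec), ("model", mod)])
        results[(v_name, m_name)] = cross_val_score(pipe, texts, labels,
                                                    cv=5).mean()
    return results
```

Each topic in the text would get its own pass through this grid, mirroring the one-model-per-topic approach described above.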
The idea was to leverage the domain knowledge of the experts on my team through content coding, and then to scale it up using a machine learning pipeline. In the post, I bemoaned how most of the “NLP” or “AI-driven” tools I had tested did not do very well. The tools I was thinking of were all web-based, point-and-click applications that I had tried out since about 2018, and they were usually unsupervised.
We are in a wildly different environment for analyzing text now than we were even a few years ago, so I am revisiting that post to explore alternate routes to classifying text. I will use the same data as I did then: 720 Letterboxd reviews of Wes Anderson’s film Asteroid City. There is only one code: Did the review discuss Wes Anderson’s unique visual style (1) or not (0)? I hand-labeled all of the reviews in one afternoon to give myself a supervised dataset to play with.
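One obvious alternate route today is to hand the coding guide directly to an LLM as a zero-shot classifier. Here is a minimal sketch, assuming the openai Python package; the model choice and prompt wording are hypothetical, not necessarily what the post lands on:

```python
# Zero-shot sketch: ask an LLM to apply the one-code guide to each review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are applying a content coding guide to Letterboxd reviews of the "
    "film Asteroid City. Label the review 1 if it discusses Wes Anderson's "
    "unique visual style and 0 if it does not. Respond with only 0 or 1.\n\n"
    "Review: {review}"
)

def classify_review(review: str) -> int:
    """Return the LLM's 0/1 code for a single review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        temperature=0,
        messages=[{"role": "user", "content": PROMPT.format(review=review)}],
    )
    return int(response.choices[0].message.content.strip())
```

Running this over all 720 hand-labeled reviews yields predicted codes that can be scored against the human labels with ordinary classification metrics.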