Newsletter Status 2025-09-01
Medium deep link fixes, and the start of an AI experiment
The newsletter now has 20 articles under its belt! We've hit a proof-of-concept milestone I wanted to reach: there is now enough free content to show new readers what kinds of articles they can expect. There is one remaining subsection I intend to add in the near future; more news when that's ready.
Prev: Newsletter Refurb 2025-08-08
Deep Links on Android
As is ever the case with bootstrapping something new, you make some mistakes and learn some things along the way. Earlier today I cleaned up a bunch of inter-article reference links on the Medium version of the publication. Getting Android deep linking to work for Medium is a little fiddly, and several articles went out that got it wrong. If you found yourself bounced out of the Medium app into your web browser, my apologies; that should be fixed now. My goal is to make that link translation happen via automation, so that forgetting a manual step can't resurface the issue.
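For the automation piece, I'm imagining something as simple as a link-rewriting pass over the draft before it goes to Medium. A minimal sketch follows; the URL patterns are placeholders, not Medium's documented deep-link format, so the point is only the shape of the script, not the exact rewrite rule.

import re

# Placeholder patterns: swap in whatever source links the publication uses
# and whatever canonical form the Medium app actually resolves in-app.
SOURCE_PATTERN = re.compile(r"https://example-publication\.medium\.com/([\w-]+)")

def rewrite_links(article_text: str) -> str:
    """Rewrite inter-article links into a form the Medium app should open directly."""
    return SOURCE_PATTERN.sub(
        r"https://medium.com/example-publication/\1", article_text
    )

Running that over the draft as a pre-publish step is the kind of small, boring automation that keeps the mistake from recurring.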
Other Learnings
So far the production process has been almost entirely manual. The AI support has basically been:
Google Search
ChatGPT prompting to run searches
ChatGPT drafting of some of the matplotlib code, which I end up adjusting, particularly if I want PNG files created from it (see the sketch just below).
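To give a sense of the adjustment involved, here is a minimal, illustrative sketch of the kind of change I make so a figure lands on disk as a PNG rather than only rendering interactively. The figure contents and filenames are made up; only the savefig step reflects the point.

import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot([0, 1, 2, 3], [1, 3, 2, 5], marker="o")
ax.set_title("Illustrative figure")
ax.set_xlabel("x")
ax.set_ylabel("y")

# The drafted code often stops short of file output; the adjustment is writing
# the figure out as a PNG at a DPI that survives Substack/Medium scaling.
fig.savefig("figure.png", dpi=200, bbox_inches="tight")
plt.close(fig)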
Everything else is organically grown. I like the writing, but I have found that at best half my time goes into creating intellectual content; the rest goes into the machinery of publishing and making it all work visually. As the material shifts from economics and opinion towards math and code with supporting visualizations, the split is closer to 25–35% of my time on the key concept itself. For example, the animation for the Markov Chains with NetworkX and PyDTMC article involved a 14-layer Draw.io diagram and an animated GIF conversion (sketched below) that took about 16 hours before it all worked as I had imagined, and even then I had to create a static slice for how Substack presents the article on the home page or in a note. Then layer on top of that converting the article to Medium, creating a Substack note, and writing a LinkedIn post announcing the article.
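For the curious, the GIF-assembly end of that work boils down to something like the following. This is a minimal sketch using Pillow; the frame filenames are placeholders, and the real pipeline involved considerably more fiddling than this.

from pathlib import Path
from PIL import Image

# Hypothetical frame files exported from the layered Draw.io diagram,
# named so they sort into display order (frame_01.png, frame_02.png, ...).
frames = [Image.open(p) for p in sorted(Path("frames").glob("frame_*.png"))]

# Stitch the frames into a looping animated GIF; duration is per frame, in ms.
frames[0].save(
    "markov_chain.gif",
    save_all=True,
    append_images=frames[1:],
    duration=800,
    loop=0,
)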
My objectives are to get more frequent content out, improve the quality of visualizations, and market the newsletter better—the last of which is getting very little attention. The solution? Automate more of the process.
The Experiment
For a while I've been a believer that Small Language Models (SLMs) are an under-appreciated resource in the GenAI space. LLMs have their uses, but the industry forces around them don't necessarily align with better customer outcomes or lighter environmental impact. I was glad to see NVIDIA release a paper promoting the role of SLMs in agentic systems. Orchestrated assemblies of SLMs make more sense to me as a way to balance stochastic and deterministic processes with lower power demands and better CapEx/OpEx efficiency.
To that end, I've started an MVP within ChatGPT that I hope to migrate later to a collection of special-purpose SLMs. The structure is an orchestration template with multiple defined roles, where some tasks only happen when I trigger them. Key roles include:
Mathematician: retrieves formulae and references (LaTeX ready).
Principal Software Engineer: drafts code based on narrow criteria.
Technical Visualization & UX Expert: suggests visuals and coordinates implementation.
Author (me): sole prose source and final decision maker.
Editor: finds minor text flaws without rewriting.
Systems Automator: observes workflow and produces summaries; responds to trigger commands (e.g., reference formatting).
Several other roles exist, but these give the gist. This approach is already forcing me to think through how to organize responsibilities and interactions among SLMs. As for the MVP budget, the only cost is a $20/month ChatGPT Plus subscription.
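To make that organization concrete, here is a rough sketch of how the role definitions might look once they move out of a ChatGPT prompt and into code. The role names come from the list above, but the structure itself is an assumption about where the MVP is headed, not the actual template.

from dataclasses import dataclass, field

@dataclass
class Role:
    """One orchestrated role: what it does and whether it waits for a trigger."""
    name: str
    responsibility: str
    triggers: list[str] = field(default_factory=list)  # empty list = always active

ROLES = [
    Role("Mathematician", "retrieve formulae and references, LaTeX ready"),
    Role("Principal Software Engineer", "draft code against narrow criteria"),
    Role("Technical Visualization & UX Expert", "suggest visuals, coordinate implementation"),
    Role("Editor", "flag minor text flaws without rewriting"),
    Role("Systems Automator", "observe workflow, produce summaries",
         triggers=["format-references"]),  # hypothetical trigger command name
]

def dispatch(command: str) -> list[Role]:
    """Return the roles that should act on an explicit trigger command."""
    return [role for role in ROLES if command in role.triggers]

The appeal of writing it down this way, even as a toy, is that it forces the trigger-driven roles to be distinguished from the always-on ones before any SLMs enter the picture.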
This article itself is the first, albeit very simple, output from using the template.
The Editor didn’t get too crazy. ☺
I realized during the process that I need to give the template a lot more information about constructing visualizations, so that isn’t a factor yet.
Inter-article linking support isn't in place yet (that part was manual), but it handled the reference retrieval and formatting (below) just fine.
It doesn’t yet know anything about porting the article to another platform, but we’ll get there.
References
Belcak, P., Heinrich, G., Diao, S., Fu, Y., Dong, X., Muralidharan, S., Lin, Y. C., & Molchanov, P. (2025). Small language models are the future of agentic AI. arXiv preprint arXiv:2506.02153. https://doi.org/10.48550/arXiv.2506.02153
The Experimentalist : Newsletter Status 2025-09-01 © 2025 by Reid M. Pinchback is licensed under CC BY-SA 4.0