New Year, New AI Product Report!

We're back for more discovery together

For those who are new around here, this is a newsletter where I highlight new and innovative AI products worth exploring & cover some news highlights of interest.

Hi there!

Happy 2024 🎆, and happy Friday!

Welcome back to our corner of the internet where we dive into interesting feats of commercially available artificial intelligence. Grab your virtual coffee (or beverage of choice) because this issue is a big one!

On the docket for today is:


That’s right, we’re growing and making new friends!

Bytesized Fam, meet The AI Product Report crew, and yes, you’re getting bunkbeds. There’ll be room for activities 😉 (Shout out to the resident “Step Brothers” fans)

We’re greeting a lot of new faces to the newsletter in 2024. The fam at will be joining us moving forward! They’re moving in with us and you don’t have to lift a finger, so let’s give them a warm welcome!

It’s big news because it’s going to grow our community, get us talking, and keep everyone in the know! The idea behind the merger is to cover more AI products, industry shifts, and market moves, and to round out our publication so you get more value per issue!

With our growing community, you can expect new channels to open this year for us to exchange ideas on!


After testing dozens of new AI products this week, here’s my top pick:

This week, leading the pack is Robin AI: a lawyer's best friend (in my case, an entrepreneurial product manager who deals semi-regularly with partnerships). This AI-powered product integrates cleanly with many firms' bread and butter - Microsoft Word. Robin AI's assistant sits right next to your work and methodically pecks at your contracts, section by section. If a bird on your shoulder isn't your cup of tea, its functionality is also available through file uploads in the Robin AI web app.

Zoom in here a bit - there are some pretty feature-rich options in here!

Of course, this isn't a product that'd beak into your contracts and leave you hanging with holes scattered throughout (see what I did there?). The app proposes changes according to a "personality" profile you can pick. Live swaps between personalities are seamless, and Robin AI's suggestions, in my opinion, go well beyond ChatGPT-4's contract review comments. I think this product has quite a future accelerating legal work everywhere - even as a crutch for non-lawyers looking to up the quality of their handoffs to legal teams.

From an AI development standpoint, embedding significant domain expertise into a system like this one is no easy task, so my hat's off to Robin AI's British creators, who are fresh off a $26 million Series B raise. It remains to be seen whether they’ll consider supporting more languages, or expand their range of supported documents to include Terms & Conditions, Service Level Agreements, etc…

Robin AI has a free trial you can find on their website:


What a way to end 2023 and start 2024! Quite a bit has moved among the major players of industry, regulatory bodies, and researchers. Here are your highlights:

Industry Updates

The Explore GPTs landing page, sporting over 30,000 variants ready for use

  • OpenAI’s December debacle: Yes, I’m intentionally sidelining the drama around the firing and rehiring of Sam Altman … I feel like enough has been said about that. Otherwise, your updates are one Google search away 😛 

AI’s Regulatory Changes & Industry Standards

  • The European Union Joins the Fray: Regulatory shifts are coming from a major market that will definitely impact growth moving forward. The EU announced a set of provisional regulations that are notably still in process (more details here). Topics of contention are the use of biometrics in and around AI models, developer transparency requirements, and copyright law.

    • This comes on the tail of the USA announcing a presidential action in October, which called for tooling and frameworks for evaluating AI systems across different facets.

  • NIST is helping promote responsible AI-related risk management: The US National Institute of Standards and Technology has tried their hand at mapping key touchpoints of trustworthiness along the typical stages of AI development cycles. For those who have sat in on a few engineering sessions with AI developers, you’ll find yourself right at home with this.

An AIPR side note on all things surrounding the ethics of AI training data: There's a lot of buzz across the world about the ethical issues tied to using all kinds of works from creators for training AI models, especially when it comes to copyright. If anything concrete pops up in this space, you bet I'll jump right in with some mentions. But until regulators of major markets step in and really lay down the law, I reckon other news sources have got you covered. That being said, I won't plan on diving deep into every individual case that hits the headlines.

Discoveries Fresh from the Lab

  • The Linux Foundation is getting us a Generative AI Commons? The Linux Foundation is extending its open-source vision by incorporating generative AI into its Commons, a move that broadens access and collaboration in AI development. Check it out

  • Turns out we can stress LLMs? Research fresh out of Apollo Research shows that injecting time sensitivity into prompts can really impact the results produced by LLMs (in this instance, GPT-4). Reading a bit further, the experiment highlights that stressing the model was a demonstration that performance under pressure can sway results, and the authors go on to specify that “Our research is thus more of an existence proof that such behavior can occur, and not indicative of how likely it is to occur in the wild." Check it out

  • OECD has set up an observatory on many AI-related things! They’re tracking data on the AI labor force, principles & policy recaps, safety & incident monitoring, and more! If that’s your flavor, go have a look:


What I’m reading → Ideas I toyed with 

Yes, I know this next one is from TechCrunch, but hear me out: I think the author, Ron Miller, brings a pretty valid point to the proverbial table. He’s arguing for the survival of task-focused AI.

With how much LLMs have taken the spotlight (even in this publication sometimes 😬), one can start to wonder pretty quickly which other forms of AI might be left in the dust. Miller argues that task-focused AI models aren’t obsolete; if anything, they’re still very relevant for their use cases, though those that are particularly hazy in their value proposition are being eclipsed by LLMs.
If you’ll indulge me in expanding on this a bit, I’d note that for complex tasks requiring both breadth of knowledge and depth on particular inputs, a hybrid model for advanced solutioning could (potentially) be an interesting way of the future. E.g.: an LLM (like ChatGPT or Google’s Bard) interacts with humans and acts as a general-purpose delegator, while the task-focused model is wrapped in a microservice and trades information with its delegator, the LLM…
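To make that hybrid idea a bit more concrete, here's a minimal sketch of the delegator pattern in Python. Everything here is hypothetical - the class names, the keyword-matching router (a real setup would ask the LLM itself which service fits a request), and the stubbed task service are all illustrative, not any vendor's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class TaskService:
    """A focused, task-specific model wrapped as a callable microservice."""
    name: str
    keywords: str          # space-separated terms this service handles
    run: Callable[[str], str]


class Delegator:
    """General-purpose front end (the LLM, in Miller's framing) that routes
    each request to the most relevant task-focused service, or falls back
    to answering generally itself."""

    FALLBACK = "No specialized service matched; answering with the general LLM."

    def __init__(self) -> None:
        self.services: Dict[str, TaskService] = {}

    def register(self, service: TaskService) -> None:
        self.services[service.name] = service

    def route(self, request: str) -> str:
        # Stand-in for the LLM's routing decision: simple keyword matching.
        for service in self.services.values():
            if any(word in request.lower() for word in service.keywords.split()):
                return service.run(request)
        return self.FALLBACK


# Usage: register a contract-review "microservice" and route a request to it.
delegator = Delegator()
delegator.register(TaskService(
    name="contract-review",
    keywords="contract clause agreement",
    run=lambda req: f"[contract-review model] analyzed: {req}",
))
print(delegator.route("Please review this contract for risky clauses."))
```

The design point is simply that the LLM stays the single conversational surface, while narrow models keep doing what they're best at behind it.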

Closing thoughts

Whew, that was a lot for a first publication since taking over. I hope this one is living up to your expectations as I get my feet under me for publishing these.

What did you think of this issue? Have ideas, suggestions for the newsletter moving forward? 

Reach out by hitting “Reply” if you’re feeling shy, or leave a comment if you’re feeling brave. 😉

I’m dedicated to building a newsletter that works for you, my wonderful community.

Until next time, stay well and stay curious!

-Sam ✌🏽
