Catch Up

Below, you’ll find my latest posts—scroll through and explore anything that catches your interest!

  • The Neural Network Building Block: It Really Is This Simple

    5–8 minutes

    In this post I describe a few mathematical concepts that are essential to neural networks. Take a read if you want an introduction to matrices, weights and biases, and activation functions.


  • How Large Language Models Work (Without the Jargon)

    13–20 minutes

    In this post I walk you through how large language models work, highlighting some of the key concepts and architectures. This is a lightweight explanation – no maths, so don’t be scared and read on! I also highlight some of the challenges LLMs face, including hallucinations, memorisation and jailbreaking.


  • The Truth About Working in AI

    7–10 minutes

    In this post I reflect on my years in AI research, highlighting both the challenges and the rewards. While the work is fast-paced and technically demanding, it offers opportunities for creativity and travel. The field is diverse, with various roles requiring different skills. Take a read for some insight into the life of an AI scientist.


  • The Future of Emotional Intelligence in an AI-Driven World

    7–11 minutes

    The post explores the increasing role of AI in everyday life and its potential impact on human emotional intelligence and connections. As people rely more on AI for interactions, advice, and information, there is concern about diminishing genuine human relationships. The content warns that excessive reliance on AI could lead to emotional disconnection and unhealthy…


The Primer Series

The Primer Series is your go-to guide for understanding AI and its key concepts! You’ll find clear definitions, real-world examples, and links to deeper resources. I suggest grabbing a cup of tea and diving in whenever you’re in the mood to learn something new!

  • In this post I describe a few mathematical concepts that are essential to neural networks. Take a read if you want an introduction to matrices, weights and biases, and activation functions.

  • In this post I walk you through how large language models work, highlighting some of the key concepts and architectures. This is a lightweight explanation – no maths, so don’t be scared and read on! I also highlight some of the challenges LLMs face, including hallucinations, memorisation and jailbreaking.

  • GPUs are crucial hardware for AI, enabling fast, parallel computation necessary for processing large data sets. This post explains the components of computers, specifically highlighting the GPU’s evolution from graphics processing to its role in AI. Despite higher manufacturing complexity and increasing demand, GPUs are essential for advancing AI technology.

  • AI Agents are set to be a major focus in 2025, evolving from previous advancements in LLMs. These agents operate autonomously, sensing their environment and taking actions without human supervision. They vary in complexity across four categories: Simple Rules-Based, Model-Based Reflex, Goal-Based, and Utility-Based Agents, each offering different capabilities and applications.