Thursday, March 20, 2025

Running Ollama on the Intel Arc GPU


How I Got Ollama Running on My Intel Arc GPU – A Step-by-Step Journey

Hey everyone, Daniel here from Tiger Triangle Technologies! I just uploaded a new YouTube video that I’m really excited about, and I wanted to share the experience with you in this blog post. If you’ve ever tried running Ollama on an Intel Arc graphics card and hit a wall because Ollama seems to favor Nvidia and newer AMD cards, this one’s for you. I’ve got some great news: there’s a solution called IPEX-LLM that saved the day for me, and I’m going to walk you through how I made it work.
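The full setup is in the video, but once the IPEX-LLM build of Ollama is up and serving, you can sanity-check it with a quick call to the API from .NET. This is only a minimal sketch: it assumes Ollama's documented default port, 11434, and its /api/tags endpoint, and nothing in the snippet is specific to IPEX-LLM itself.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Quick sanity check: ask the local Ollama server which models it has.
    public static class OllamaCheck
    {
        public static async Task Main()
        {
            using var http = new HttpClient();
            // /api/tags returns the locally available models as JSON.
            var json = await http.GetStringAsync("http://localhost:11434/api/tags");
            Console.WriteLine(json);
        }
    }

If that call returns a JSON list of models instead of a connection error, the server side of the setup is working.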

Saturday, March 8, 2025

Testing the Worldviews of 7 Popular AI Chatbots


Unraveling the Minds of AI: A Journey Through Their Worldviews

Hey everyone! I just dropped a new video on my YouTube channel, Tiger Triangle Technologies, and I’m beyond excited to share the behind-the-scenes scoop with you here. This project has been a wild ride—think of it as peeling back the digital curtain on some of today’s top AI chatbots and the layers within their neural networks. What do they really think about life’s big questions? You might be surprised by what I uncovered. We’ll also talk about why it matters, and why I’m already itching to do this again.

Wednesday, March 5, 2025

Grokomatic - Automating Social Media Posts using Generative AI

What is Grokomatic?

Grokomatic is an experimental console program I created using .NET and C#. It generates social media content based on information in a JSON file; in my example, that file contains a list of technical innovations. The program randomly picks an item from the list, generates the related text and an image using AI, and posts the result to Facebook, Instagram, and X (formerly Twitter).

In this series, we'll walk through the source code, which I've posted on GitHub.
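Before we open up the real source, here's a minimal sketch of the core flow in C#. The record shape, file name, and messages below are my own illustration for this post, not the actual Grokomatic code; the real program goes on to generate the post text and image with AI and publish them.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text.Json;

    // Hypothetical shape of one entry in the JSON file; the real
    // Grokomatic schema may differ.
    public record Innovation(string Name, string Description);

    public static class Program
    {
        public static void Main()
        {
            // Load the list of technical innovations from disk.
            var json = File.ReadAllText("innovations.json");
            var items = JsonSerializer.Deserialize<List<Innovation>>(
                json,
                new JsonSerializerOptions { PropertyNameCaseInsensitive = true })
                ?? new List<Innovation>();

            if (items.Count == 0)
            {
                Console.WriteLine("No innovations found in innovations.json.");
                return;
            }

            // Randomly pick one item to post about.
            var pick = items[Random.Shared.Next(items.Count)];
            Console.WriteLine($"Selected topic: {pick.Name}");

            // From here, Grokomatic generates the post text and image with AI,
            // then publishes to Facebook, Instagram, and X.
        }
    }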

Tuesday, November 12, 2024

The Case of the Slow Chatbot - and How to Make It Lightning Fast

It was a brisk afternoon at Tiger Triangle Technologies, and the hum of computations filled the air, busier than a bustling London thoroughfare. I had scarcely finished my previous experiment—indeed, quite a robust affair with Blazor web applications—when a most peculiar challenge presented itself: the elusive matter of optimizing performance for the Ollama software suite.

Seated at my desk, surrounded by stacks of benchmark records, diagnostics, and freshly sharpened code, I felt a thrill much akin to that of a detective on the brink of discovery. “Ah, here we are,” I muttered. “Today, we’re diving deep into the mysteries of local model performance.”

The challenge was one fit for a connoisseur of computational intrigues: a fundamental matter, if you will, in the realm of Ollama’s capabilities. For those as intent on maximizing Ollama’s prowess as I was, today’s expedition would surely reveal insights worth their weight in bytes.
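Before the investigation proper, every good detective needs a measuring instrument. Here's a small C# sketch of the sort of baseline measurement I mean, using Ollama's documented /api/generate endpoint and the eval_count and eval_duration fields in its non-streaming response; the model name is only a placeholder.

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    // Measure raw generation speed for a local Ollama model.
    public static class OllamaBenchmark
    {
        public static async Task Main()
        {
            using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

            // One non-streaming generation request; "llama3" is just a placeholder.
            var body = JsonSerializer.Serialize(new
            {
                model = "llama3",
                prompt = "Explain what a hash table is in one paragraph.",
                stream = false
            });

            var response = await http.PostAsync("/api/generate",
                new StringContent(body, Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();

            using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
            var root = doc.RootElement;

            // Ollama reports eval_count (tokens generated) and
            // eval_duration (nanoseconds spent generating them).
            double tokens = root.GetProperty("eval_count").GetDouble();
            double seconds = root.GetProperty("eval_duration").GetDouble() / 1e9;
            Console.WriteLine($"{tokens / seconds:F1} tokens/sec");
        }
    }

A number like this, taken before and after each change, is what turns "it feels faster" into actual evidence.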