It’s the week of November 6th, so that means it’s time to check out this week’s top five repos. This week, we have a set of new entries ranging from education-focused repos for generative AI to an open-source payment processor. So take a look and see what made the list!
Taking the top spot is a comprehensive 12-lesson course by Microsoft. The goal of this repo is to teach users the fundamentals of building Generative AI applications. Each lesson covers a key aspect of Generative AI principles and application development. Throughout the course, you’ll build your own Generative AI startup so you can get an understanding of what it takes to launch your ideas.
Are you interested in running your own large language model in a local environment? Well, then Ollama has you covered. This repo promises to give you the tools to “Get up and running with Llama 2 and other large language models locally,” which is a pretty bold claim.
Hyperswitch is a community-led, open payments switch, written in Rust, that enables businesses to access the best payment infrastructure for their needs. It does this by providing a single point of integration for multiple payment processors, reducing the need for businesses to maintain separate integrations with each processor.
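To make the “single point of integration” idea concrete, here is a minimal sketch of the pattern in Python. Hyperswitch itself is written in Rust, and the names here (`PaymentSwitch`, `AlphaPayAdapter`, `BetaPayAdapter`) are hypothetical, not Hyperswitch’s actual API: the point is that the business codes against one interface while adapters handle each processor.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class PaymentRequest:
    amount_cents: int
    currency: str

class PaymentProcessor(Protocol):
    """The one interface the business integrates against."""
    name: str
    def charge(self, request: PaymentRequest) -> str: ...

class AlphaPayAdapter:
    name = "alphapay"
    def charge(self, request: PaymentRequest) -> str:
        # A real adapter would call this processor's HTTP API here.
        return f"{self.name}:charged:{request.amount_cents}{request.currency}"

class BetaPayAdapter:
    name = "betapay"
    def charge(self, request: PaymentRequest) -> str:
        return f"{self.name}:charged:{request.amount_cents}{request.currency}"

class PaymentSwitch:
    """Single integration point routing requests to many processors."""
    def __init__(self, processors: list[PaymentProcessor]):
        self._by_name = {p.name: p for p in processors}

    def charge(self, processor_name: str, request: PaymentRequest) -> str:
        return self._by_name[processor_name].charge(request)
```

Adding a new processor then means writing one adapter, not re-plumbing the business logic.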
tailspin works by reading through a log file line by line, then running a series of regular expressions (regexes) against each line. In this case, the regexes match patterns like dates, numbers, severity keywords, and more. tailspin does not make any assumptions about the format or position of the items it wants to highlight, so it can be used with any type of log file, regardless of format, and it requires no configuration or setup.
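The line-by-line regex approach can be sketched in a few lines of Python. tailspin itself is written in Rust, and the patterns and ANSI color choices below are illustrative assumptions, not tailspin’s actual rules or theme; combining the patterns into one alternation keeps each span of text from being highlighted twice.

```python
import re

# Illustrative subset of patterns a log highlighter might match.
# The regexes and ANSI colors are assumptions for this sketch.
PATTERNS = {
    "date": (r"\d{4}-\d{2}-\d{2}", "\033[35m"),           # magenta
    "severity": (r"ERROR|WARN|INFO|DEBUG", "\033[31m"),   # red
    "number": (r"\b\d+\b", "\033[36m"),                   # cyan
}
RESET = "\033[0m"

# One combined alternation with named groups, so each character is
# matched (and colored) at most once in a single left-to-right pass.
COMBINED = re.compile(
    "|".join(f"(?P<{name}>{pat})" for name, (pat, _) in PATTERNS.items())
)

def highlight(line: str) -> str:
    """Wrap every match in the ANSI color for its pattern."""
    def colorize(m: re.Match) -> str:
        color = PATTERNS[m.lastgroup][1]
        return f"{color}{m.group(0)}{RESET}"
    return COMBINED.sub(colorize, line)
```

Because the patterns describe the items themselves rather than column positions, the same highlighter works on any log layout, which mirrors the no-configuration point above.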
The Yi series models are large language models trained from scratch by developers at 01.AI. The first public release contains two bilingual (English/Chinese) base models with parameter sizes of 6B and 34B. Both models were trained with a sequence length of 4K, which can be extended to 32K at inference time. This means they can be used to generate longer sequences of text, such as paragraphs or even essays. The Yi series models are still under development, but they have already been shown to be capable of generating high-quality text in both English and Chinese.
What a great entry for November! Not only did we get our first education-focused repo, but we’re seeing firsthand where users are heading as time goes on. It will be interesting to see how next week shapes up. We may see some familiar repos, or a completely new set of them. Either way, ODSC will be here to share what’s making waves on GitHub.