Author: jackadmin

  • LLMs – How To Choose

    AI and Machine Learning. An LLM is a “Large Language Model”: a model trained on enormous amounts of text data so that its internal structure learns how to generate language. Today’s language models are focused on generating text across the different languages of the world. A broad number of AI companies are developing these “generative chatbots” that let a user engage with a human-ish software service to solve complex problems. These are chatbots like ChatGPT, Gemini, DeepSeek, and Copilot.

    AI and Machine Learning algorithms today are good at retrieving information, keeping a backlog of previous conversational context, presenting complex topics drawn from across the internet, and learning how to generate text that is more and more user-specific. All of this comes with a heavy dose of training and origination bias: the people who create the algorithms and the models bake their own inherent bias into their AI products.

    The GPT in ChatGPT, LocalGPT, or PrivateGPT comes from the LLMs being “generative pretrained transformers”. Which to me sounds like a huge store of learned weights and the outputs of common mathematical functions that can aid in the natural language processing task. These transformer-based models emerged in 2017.

    The LLMs that we independent software developers use include BLOOM and LLaMA. Other free LLMs are Llama 2, Gemma, Falcon, Mixtral 8x7B, and Phi-3.

    Paid and subscription LLMs are aimed at company software developers and those seeking to generate code from their AI prompts. Some of the paid LLMs include GPT-4, Gemini Ultra, Claude 3, and Grok.

    When considering the LLM you need, look at these abilities:

    • Reasoning
    • Coding
    • Creating Video/Audio/Images
    • Mathematics ability
    • Efficiency
    • Analysis
    • Conversational Ability

    When considering the drawbacks of an LLM, look at these limitations:

    • Resource constraints of your hardware/cloud subscription
    • Bias in the pre-training data
    • Number of parameters in the pre-trained model
    • Constraints and deployability of the model

    Sources: Choosing the Right LLM for You: A Guide for Powerful Gen AI

  • Making Apps In The Cloud VS On Premise

    At first glance, the development process of making apps in the cloud versus on-premise looks pretty much the same, just with different dependencies. And you know what? It’s actually pretty different, because in a cloud environment a certain set of assumptions are made that differ quite a bit from the assumptions made during on-premise development.

    With cloud environments like Amazon Web Services you have a lot of different, small tools at your disposal. Meaning, if you just need a microservice up, they have something called AWS Lambda that can run your code upon request; tie that into an Amazon API Gateway and you officially have a very solid API web service! That is part of the ease of a cloud environment.
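
    Just to sketch what that wiring looks like from the AWS CLI: the function name, role ARN, account ID, and region below are made-up placeholders, and you would still have to run aws lambda add-permission so API Gateway is allowed to invoke the function. Roughly:

    # package a trivial handler and create the Lambda function
    zip function.zip lambda_function.py
    aws lambda create-function \
        --function-name hello-api \
        --runtime python3.9 \
        --handler lambda_function.lambda_handler \
        --zip-file fileb://function.zip \
        --role arn:aws:iam::123456789012:role/lambda-basic-execution

    # "quick create" an HTTP API in API Gateway that fronts the function
    aws apigatewayv2 create-api \
        --name hello-api \
        --protocol-type HTTP \
        --target arn:aws:lambda:us-east-1:123456789012:function:hello-api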

    You can also simulate cloud environments on your own computer. This is a must for any type of professional development. As you become more experienced writing microservices for the cloud, you will start to see how much time can be eaten up waiting for your deployment server, or waiting for yourself to click through the menus to upload a new version. If you need to test new versions ASAP, download something like Amazon’s AWS Serverless Application Model (SAM). I cover this in another blog post here.
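
    To give a feel for why local testing saves so much time, a typical SAM CLI session looks something like this. The project and function names come from SAM’s sample “hello world” template, so treat them as placeholders:

    # scaffold a sample app, build it, and run the API locally in Docker
    sam init --runtime python3.9 --app-template hello-world --name demo-service
    cd demo-service
    sam build
    sam local start-api        # serves the handlers at http://127.0.0.1:3000
    # or invoke a single function with a canned test event
    sam local invoke HelloWorldFunction --event events/event.json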

    With on-premise development you have your traditional server: it sits headless in a locked back room somewhere, maintained by a couple of guys from telecom and system administration. They update it, you hope. And they keep it online on the rare occasions it gets knocked offline, which usually requires a report, made by a person or by a machine, to tell people that the server is down.

    Automating the on-premise infrastructure is a bit harder, because you have a lot more opportunity for entropy and customization, whereas a cloud provider follows an API-driven development model, which can be easier for some tasks. Right now, if you are automating on-premise devices, maybe you have something like a Kubernetes cluster set up with nodes that can respond to demand. If you have physical servers, maybe you have a program such as Puppet installed on one master server, with all the agent nodes responding to changes instantiated from the master. Or perhaps you automate with Chef, writing scripts in Ruby to manage each computer’s software dependencies.

    With both cloud and on-premise app development you have lots of options for automation. Sometimes one is better than the other. It is up to you, the user, the developer, to create the value with those tools. In my opinion, with my skills, I would prefer to do on-premise automated infrastructure deployment. But I am adding in some other considerations regarding security and how much time I have to spend negotiating the issues that arise from automating on-premise devices. Plus I think on-premise saves me money.
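
    To make those two models a little more concrete, here is a minimal sketch of what a run looks like in each. The manifest filename and the Puppet server hostname are placeholders I made up, not anything from a real setup:

    # Kubernetes: declare the desired state and let the cluster reconcile it
    kubectl apply -f my-service-deployment.yaml
    kubectl get nodes -o wide

    # Puppet: an agent node normally pulls its catalog on a schedule,
    # but you can trigger a run by hand against the master
    sudo puppet agent --test --server puppet.example.com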

  • Creating a Reverse Proxy in HTTPD or Apache2

    I called it HTTPD and/or Apache2 because I have been using CentOS more lately and need to distinguish that for search engines.

    To configure a reverse proxy in httpd you need the following code:

    <VirtualHost *:80>
        ServerName api.somesite.com
        Redirect / https://api.somesite.com/
    </VirtualHost>
    
    <VirtualHost *:443>
        ServerName api.somesite.com
        SSLEngine on
        SSLCertificateFile /path/to/cert.pem
        SSLCertificateKeyFile /path/to/key.pem
        ErrorLog /path/to/logs/api.somesite.com-ssl-error.log
        CustomLog /path/to/logs/api.somesite.com-ssl.log combined
    
        ProxyPass / http://127.0.0.1:8000/
        ProxyPassReverse / http://127.0.0.1:8000/
    </VirtualHost>
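
    On CentOS the proxy modules (mod_proxy and mod_proxy_http) are normally loaded out of the box, but it is worth confirming that, checking the syntax, and reloading httpd, something along these lines. Depending on your backend you may also want ProxyPreserveHost On inside the 443 VirtualHost:

    httpd -M | grep proxy          # confirm mod_proxy and mod_proxy_http are loaded
    apachectl configtest           # check the syntax of the new VirtualHosts
    sudo systemctl reload httpd    # pick up the changes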

    Afterwards you will most likely have to open a port in your firewall (firewalld on CentOS). To do this you do the following:

    sudo firewall-cmd --zone=public --add-port=8000/tcp
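
    Keep in mind that without --permanent the rule disappears at the next reboot, and for the proxy itself to be reachable from outside you generally need the HTTP/HTTPS ports open as well. Assuming the default public zone, something like this:

    sudo firewall-cmd --zone=public --add-port=8000/tcp --permanent
    sudo firewall-cmd --zone=public --add-service=http --permanent
    sudo firewall-cmd --zone=public --add-service=https --permanent
    sudo firewall-cmd --reload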

    Even after that you may run into the issue where SELinux is not allowing httpd to talk to that backend port. To allow the connection under SELinux, use the following command:

    /usr/sbin/setsebool -P httpd_can_network_connect 1
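
    You can check that the boolean stuck with getsebool. If you want something a bit more surgical than allowing all outbound connections from httpd, there is also a boolean aimed specifically at proxy/relay setups:

    getsebool httpd_can_network_connect
    # narrower alternative intended for forward/reverse proxy use
    /usr/sbin/setsebool -P httpd_can_network_relay 1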

    While the above may not be a surgical fix for this problem, it does solve the problem right away. I have included it in my sources at the bottom in case I need to refer back to it.

    Sources:

  • Nvidia Broadcast

    My friend introduced me to this Nvidia software that can put a virtual greenscreen behind you and eliminate keyclicks during recordings. He demonstrated it live for me in Discord. It was pretty amazing not being able to hear him pound on his keyboard while he talked at the same time. He wasn’t sure if it would work for my system since he has a 2070, an RTX video card, and I have a 1070, a GTX card.

    I did a little more research on it today and found out the technology for keyclicks used to be called “RTX Voice”, which led me to this link. Unfortunately, it seems to only work for RTX cards. I remember in my research that the Nvidia team did say they had patched in support for GTX cards, but I was unable to find a download after that. It’s not super important to me because I have some different ideas I’ll get around to that could fix this, mainly using a microphone arm or stand, and I think securing a better headset/microphone combo isn’t outside the realm of possibility. So I will keep this on the backburner.

    In other broadcast news, I have restarted my streaming channel, and I began streaming Escape From Tarkov again. Lately, I have tried out that new game Phasmophobia, the ghost-hunting game, and that was a lot of fun. I hope both these games come out of early access and get more fleshed out soon. I hadn’t played many games up until this past weekend, and when I did I felt really refreshed and excited about some of the possibilities in those games. I hope to continue to have the free time to explore more and stream more. Some of my side projects are wrapping up and I hope to have them on this website sometime soon. Until next time…

  • Welcome Again!

    This is a WordPress blog. It used to be based around one project, an internet radio station, but now it is about all my projects, and sometimes will be about internet radio stations!

    I love music! If you are a musician please stop worrying about anything else and go make some more music!

  • Why You Should Use Free Software

    FOSS, or Free and Open Source Software, is a software phenomenon where software developers freely distribute their source code alongside the software they produce. This used to be almost the de facto standard for how things were done in the software and hardware world of the 60s and 70s. You would buy a new piece of equipment, and the vendor would supply a user manual alongside the source code so that you could make any tweaks you needed to get it working in your environment. Sometimes an update to another component you already had would require a tweak in those source files. And since the early programmers were probably a good majority of the users of such software, they had the capability to actually make said changes.

    The software that I had in mind is actually OpenOffice. I switched to using this after being fed up with licenses and yearly fees from Microsoft. It has been a great addition to my computers, and I mainly use the “Writer” and “Calc” programs. Those are replacements for Microsoft Word and Excel, respectively. The programs look good, and they run fine, although I don’t have much experience with using super large files in them. I remember back in 2009 I tried using OpenOffice on my Ubuntu computer at the time. I liked it at first, but then came the crashing. And when you’re working on a Writer document you sure as hell don’t want to lose all that data.

    They must have patched out those bugs, or at least added more data-recovery options by now, because if I ever force-restart my computer, OpenOffice always comes back with a recovery option for my document. That solves the reliability problem I had. As for everything else, the software suite is feature-dense, and has everything you need to make a document look as good as it did when you made it in Microsoft Office.

    Now, there are some benefits to using free software. You don’t have to pay. You don’t have to be hounded by a license checker. This free software can output the same formats as Word documents. And you don’t have to subject yourself to any data collection or telemetry that your vendor might impose on you. This is a big reason why I use this software instead of Google Docs. I used to have a lot of different Google Docs in Drive, but that was from 2008 to 2015-ish. Now I try not to put anything in there. Whatever you type into these big companies’ products could be data-mined to better target ads at you. And we don’t want that. So that is why you should use Open Source (Free) Software.

  • How to Show How Many Days Are Left in an SSL Certificate from the Command Line

    echo | openssl s_client -connect <your server here>:443 2>/dev/null | openssl x509 -noout -dates

    Taken from a Let’s Encrypt community answer:
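
    If you want the actual number of days left rather than the raw dates, a small sketch like this works, assuming GNU date (the <your server here> placeholder is the same as above):

    # pull the notAfter date, then convert it into "days left"
    end=$(echo | openssl s_client -connect <your server here>:443 2>/dev/null \
        | openssl x509 -noout -enddate | cut -d= -f2)
    echo $(( ( $(date -d "$end" +%s) - $(date +%s) ) / 86400 )) days left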

  • Is Google Going To Sunset their Google Play Music App this Decade?

    I have been wondering this. I wonder whether or not Google is going to sunset or discontinue their Google Play Music service, because I haven’t seen any significant changes in what seems like 2-3 years. The last change I can remember is them fiddling with the context menu (I’m talking about…

  • How to Check If Raspberry Pi Camera is Working

    I got the answer from this site: https://www.raspberrypi.org/forums/viewtopic.php?t=46113

    And the answer is:

    vcgencmd get_camera

    When the camera is connected, the output is like this:

    supported=1 detected=1
    Credit for this answer goes to user znanev. Hats off to you sir.
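
    As an extra sanity check you can try to actually capture a frame. On the Raspberry Pi OS releases of that era the bundled capture tool was raspistill, so something like this should drop a JPEG in the current directory:

    # grab a still image to confirm the sensor really works
    raspistill -o test.jpg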

  • Configuring headless raspberry pis

    Yeah. Needed to do this for some testing: some websites needed hosting, some microservices needed running, and some other inane, mundane stuff had to be done with these Pi Zeros I had. Now that they are set up, they are feeding me info and stats about different things like power usage, failed code builds, lagging todos, business service statuses, upcoming appointments, and general health reminders. It’s part of my plan to build a smart house when I’m older. And it’s part of a plan to help me open my various businesses from ideas I’ve got. Anywho… here we go!

    I had wanted to configure these Pis headlessly, so I gave it a quick search and found this article covering exactly that. It’s at Losant and written by a fellow named Taron.
    Here is the link to said article.

    I am going to distill the important bits and post them here for me and you. I will definitely use this in the future; I’ve wanted to come back to my own post for it twice now, but until now it was just a link out. Well, this time I am finally writing it. Here we go!

    I am assuming here that you already have flashed all that shit to your MicroSD card, and have a partition called boot to traverse.

    1. In boot, create a file called ssh. This is all you need to enable SSH.
    2. In boot, create a file called wpa_supplicant.conf.
    • This supplicant file contains everything you need to automatically connect to WiFi.
    • Also, add this shit:

    country=US
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1

    network={
        ssid="WIFI_SSID"
        scan_ssid=1
        psk="WIFI_PASSWORD"
        key_mgmt=WPA-PSK
    }
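
    Once the Pi boots with those two files in place it should join the WiFi on its own. Assuming the default hostname and user that Raspberry Pi OS images shipped with at the time, you can find it on the network and log in like so:

    # after first boot, find the Pi on the network and log in
    ping raspberrypi.local
    ssh pi@raspberrypi.local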