<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Neural Networks &#8211; Megaputer Intelligence</title>
	<atom:link href="https://www.megaputer.com/tag/neural-networks/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.megaputer.com</link>
	<description>Your Knowledge Partner</description>
	<lastBuildDate>Tue, 24 Mar 2026 00:02:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.0.22</generator>

<image>
	<url>https://www.megaputer.com/wp-content/uploads/favicon.png</url>
	<title>Neural Networks &#8211; Megaputer Intelligence</title>
	<link>https://www.megaputer.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Choosing Machine Learning Models</title>
		<link>https://www.megaputer.com/choosing-machine-learning-models/</link>
		<pubDate>Mon, 21 Oct 2019 21:27:31 +0000</pubDate>
		<dc:creator><![CDATA[Chris Farris]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Neural Networks]]></category>

		<guid isPermaLink="false">https://www.megaputer.com/?p=32435</guid>
		<description><![CDATA[<p>When setting out to use machine learning to create your own models, a question you may be asking yourself is: Which model framework do I choose? While there is rarely a definitive or clear answer to this question, let’s discuss some things to consider when making this choice.</p>
<p>The post <a rel="nofollow" href="https://www.megaputer.com/choosing-machine-learning-models/">Choosing Machine Learning Models</a> appeared first on <a rel="nofollow" href="https://www.megaputer.com">Megaputer Intelligence</a>.</p>
]]></description>
				<content:encoded><![CDATA[<section class="l-section wpb_row height_small"><div class="l-section-h i-cf"><div class="g-cols vc_row type_default valign_top"><div class="vc_col-sm-12 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<p>As we have discussed previously, machine learning approaches to modeling are just that – approaches. There are many forms of models we could use with machine learning, each with different design philosophies and quirks. When setting out to use machine learning to create your own models, a question you may be asking yourself is: Which model framework do I choose? From Neural Networks, to Support Vector Machines, to Decision Trees, to Linear Regression, there are many options. While there is rarely a definitive or clear answer to this question, let’s discuss some things to consider when making this choice.</p>
<h2>Model Properties</h2>

		</div>
	</div>
<div class="g-cols wpb_row type_default valign_top vc_inner "><div class="vc_col-sm-8 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<p>Model frameworks come with a set of qualitative properties which, while difficult to measure directly, are useful to think about. We will consider two of them: capacity and complexity. Capacity is a measure of a model framework&#8217;s ability to describe data distributions; it can be thought of as potential accuracy. For example, imagine we scatter pebbles on the ground and would like to model the shape they form with two different models. One model is a rigid stick (depicted by the blue line) and the other is an elastic band (depicted by the red line).</p>

		</div>
	</div>
</div></div></div><div class="vc_col-sm-4 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="w-image align_center"><div class="w-image-h"><img width="500" height="385" src="https://www.megaputer.com/wp-content/uploads/a-mars-regression-function-with-three-knots-marked-by-e1572363100397.jpg" class="attachment-shop_single size-shop_single" alt="MARS - Linear vs Elastic" /></div></div><div class="ult-spacer spacer-69ef91a173999" data-id="69ef91a173999" data-height="10" data-height-mobile="30" data-height-tab="20" data-height-tab-portrait="20" data-height-mobile-landscape="10" style="clear:both;display:block;"></div></div></div></div></div><div class="g-cols wpb_row type_default valign_top vc_inner "><div class="vc_col-sm-12 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<p>The stick has very low capacity because it can only model one shape while changing the angle that it lies on the ground. A few pebbles may fit into this model, but it is unlikely to be a good choice for many situations. The elastic band, while probably not being perfect, can twist and bend to create a host of complex curves to fit our data. This model can fit more potential data distributions and has a higher capacity.</p>
<p>Capacity is a very important property of models. If our model’s capacity is too low, we may never be able to adequately make predictions no matter how much training data we use or how fancy our hardware is. <a href="https://www.megaputer.com/2019-news-neural-networks/" target="_blank" rel="noopener">Models with high capacity include Neural Networks</a> and Support Vector Machines (SVMs), and this is why they have been so popular.</p>
<p>However, capacity is not the only property to consider. Another quality is complexity, which can also be thought of as interpretability or how easy it is for a human to understand. In our previous example, the stick had low complexity (or high interpretability). It can be described and understood well by humans, and it is highly generalizable. Low complexity models include Linear Regression and Decision Trees. Conversely, the many curves created by the elastic band are very complex and difficult for humans to describe. SVMs and Neural Networks would, therefore, be considered high complexity models.</p>
<p>This relationship between complexity and capacity that we saw in the above example is generally true for all of our models. By increasing our capacity, we often must incur the cost of increased complexity. This is part of the analysis you must consider when choosing a model. Do you need to be able to describe or personally understand what the model is doing? If so, going all in for the most complex but accurate model may not be desirable. This is why many hospitals and healthcare-related institutions often rely on less complex model structures like Random Forests. When taking the health of patients into account, <a href="https://www.megaputer.com/solutions/healthcare/#utilization" target="_blank" rel="noopener">doctors want to be able to understand what a model is doing before taking action</a> on those results.</p>
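<p>To make this trade-off concrete, here is a toy Python sketch (the data and code are hypothetical, not from any particular library) comparing a low-capacity model, a straight line, against a high-capacity one, a one-nearest-neighbor model that simply memorizes every pebble:</p>

```python
# Low capacity: an ordinary least-squares line (the "rigid stick").
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

# High capacity: 1-nearest-neighbor (the "elastic band", taken to the extreme).
def fit_1nn(xs, ys):
    pairs = list(zip(xs, ys))
    return lambda x: min(pairs, key=lambda p: abs(p[0] - x))[1]

xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 0.9, 0.1, 1.1, 0.2, 0.8]  # a zig-zag no straight line can follow

line, nn = fit_line(xs, ys), fit_1nn(xs, ys)
mse = lambda f: sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"line MSE: {mse(line):.3f}")  # stays well above zero
print(f"1-NN MSE: {mse(nn):.3f}")    # zero: it memorized the data
```

<p>The memorizer wins on training error, but it is far harder to summarize in a sentence than a line with one slope and one intercept, which is exactly the capacity-versus-complexity tension described above.</p>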
<h2>Resources</h2>
<p>Thus far we have discussed the theoretical constraints of a trained model. Now we should consider more practical properties. A machine learning model requires training, which in turn requires data, processing power, and time. When selecting a machine learning model, we need to weigh how much of each resource we have available to invest. Additionally, the size of the trained model itself is important. For example, if we plan to deploy the model on mobile devices, we should be cautious about how much storage the model requires. As expected, if we desire models with high capacity, we usually need to invest more resources in training and storage. Let&#8217;s consider a sample of models and discuss the resources they require.</p>

		</div>
	</div>
<div class="w-image align_center style_shadow-1"><div class="w-image-h"><img width="1024" height="400" src="https://www.megaputer.com/wp-content/uploads/machine-learning-world-2.png" class="attachment-full size-full" alt="machine learning world" srcset="https://www.megaputer.com/wp-content/uploads/machine-learning-world-2.png 1024w, https://www.megaputer.com/wp-content/uploads/machine-learning-world-2-300x117.png 300w, https://www.megaputer.com/wp-content/uploads/machine-learning-world-2-768x300.png 768w, https://www.megaputer.com/wp-content/uploads/machine-learning-world-2-600x234.png 600w" sizes="(max-width: 1024px) 100vw, 1024px" /></div></div><div class="ult-spacer spacer-69ef91a17512f" data-id="69ef91a17512f" data-height="30" data-height-mobile="25" data-height-tab="25" data-height-tab-portrait="25" data-height-mobile-landscape="25" style="clear:both;display:block;"></div>
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<p>On the low-cost end of the spectrum are Naïve Bayes models. These models are generally lower in capacity but are extremely easy to interpret, and they are exceedingly fast to train and store. Unfortunately, they rely on the Naïve Bayes independence assumption, which is rarely strictly true for real data. However, if your data does not stray too far from that assumption, you may find that the model provides decent results in a cheap package.</p>
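<p>The independence assumption is easier to see in code. Below is a minimal Bernoulli-style Naïve Bayes sketch in Python (the toy documents and the smoothing choice are illustrative assumptions, not a production implementation):</p>

```python
def train_nb(docs):
    """docs: list of (set_of_words, label) pairs."""
    labels = [lab for _, lab in docs]
    prior = {lab: labels.count(lab) / len(labels) for lab in set(labels)}
    vocab = set().union(*(words for words, _ in docs))
    likelihood = {}
    for lab in prior:
        cls_docs = [words for words, l in docs if l == lab]
        likelihood[lab] = {
            # Laplace smoothing keeps unseen words from zeroing the product
            word: (sum(word in d for d in cls_docs) + 1) / (len(cls_docs) + 2)
            for word in vocab
        }
    return prior, likelihood, vocab

def classify(doc, prior, likelihood, vocab):
    def score(lab):
        p = prior[lab]
        for word in vocab:                # the "naive" step: multiply
            q = likelihood[lab][word]     # per-word probabilities as if
            p *= q if word in doc else (1 - q)  # they were independent
        return p
    return max(prior, key=score)

docs = [({"free", "win", "cash"}, "spam"), ({"win", "prize"}, "spam"),
        ({"meeting", "notes"}, "ham"), ({"project", "notes"}, "ham")]
prior, likelihood, vocab = train_nb(docs)
print(classify({"free", "prize"}, prior, likelihood, vocab))  # spam
```

<p>Training is a single counting pass over the data, which is why these models are so cheap to build and store.</p>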
<p>Also relatively low-cost are Decision Trees and Random Forests. These models are interpretable, and their tree structure makes them easy to store efficiently if you don&#8217;t have an enormous number of features. Unfortunately, with these models we start to see an increase in the time required to train.</p>
<p>SVMs, a popular technique, come with a large jump in the resources required. Training them usually requires more data than other methods, and the resulting models are bulky, almost impossible to interpret in plain English, and tricky to train in the first place because of the choice of kernel. However, SVMs are powerful models when trained correctly, and thus they are widely used.</p>
<p>Possibly the most costly of the models are Neural Networks. Famed for their high capacity, these models require extensive processing power and time to train and can take up a large chunk of storage. But it isn&#8217;t all doom and gloom. Because of how they are trained, Neural Networks can be fine-tuned with additional data at any point, meaning that we don&#8217;t need to retrain our models from scratch every time we get new data.</p>
<h2>Which Model to Pick?</h2>
<p>Ultimately, the answer is not obvious in every case. When choosing which machine learning model to use, we need to consider many different factors: What tradeoff do we want to make between capacity and complexity? What resources do we have available for our training? Do we want to be able to store the model in a compact manner? Do we want to be able to continually use new data without having to retrain the model entirely each time? Finally, we don’t always make the right choice. Sometimes we have to <a href="https://www.megaputer.com/solutions/predictive-analytics/#tools" target="_blank" rel="noopener">experiment with multiple structures to find one that fits our needs</a>. If you can, train multiple models to help decide which one to devote your attention to.</p>

		</div>
	</div>
</div></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://www.megaputer.com/choosing-machine-learning-models/">Choosing Machine Learning Models</a> appeared first on <a rel="nofollow" href="https://www.megaputer.com">Megaputer Intelligence</a>.</p>
]]></content:encoded>
			</item>
		<item>
		<title>What&#8217;s the News with Neural Networks?</title>
		<link>https://www.megaputer.com/2019-news-neural-networks/</link>
		<pubDate>Tue, 28 May 2019 18:10:03 +0000</pubDate>
		<dc:creator><![CDATA[Chris Farris]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Data Analytics]]></category>
		<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Neural Networks]]></category>

		<guid isPermaLink="false">https://www.megaputer.com/?p=31864</guid>
		<description><![CDATA[<p>There are many reasons for the explosion of machine learning advancements over the past decade. We now have vastly improved hardware for fast computation, and memory is cheaper than ever. Individually, these advancements are already a blessing for the technology-space. But for AI, they have opened the gates for something truly powerful—Neural Networks.</p>
<p>The post <a rel="nofollow" href="https://www.megaputer.com/2019-news-neural-networks/">What&#8217;s the News with Neural Networks?</a> appeared first on <a rel="nofollow" href="https://www.megaputer.com">Megaputer Intelligence</a>.</p>
]]></description>
				<content:encoded><![CDATA[<section class="l-section wpb_row height_small"><div class="l-section-h i-cf"><div class="g-cols vc_row type_default valign_top"><div class="vc_col-sm-12 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="g-cols wpb_row type_default valign_top vc_inner "><div class="vc_col-sm-6 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<p>There are many reasons for the explosion of machine learning advancements over the past decade. We now have vastly improved hardware for fast computation, and memory is cheaper than ever. Data is now “Big Data,” and it is both jealously hoarded and publicly available in repositories such as ImageNet. Individually, these advancements are already a blessing for the technology-space. But for artificial intelligence (AI), they have opened the gates for something truly powerful—Neural Networks.</p>

		</div>
	</div>
</div></div></div><div class="vc_col-sm-6 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="w-image align_none"><div class="w-image-h"><img width="1024" height="611" src="https://www.megaputer.com/wp-content/uploads/shutterstock_592921421-1024x611.jpg" class="attachment-large size-large" alt="" srcset="https://www.megaputer.com/wp-content/uploads/shutterstock_592921421-1024x611.jpg 1024w, https://www.megaputer.com/wp-content/uploads/shutterstock_592921421-300x179.jpg 300w, https://www.megaputer.com/wp-content/uploads/shutterstock_592921421-768x459.jpg 768w, https://www.megaputer.com/wp-content/uploads/shutterstock_592921421-600x358.jpg 600w" sizes="(max-width: 1024px) 100vw, 1024px" /></div></div></div></div></div></div>
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<h2>Neural Whatnow?</h2>
			<p>Neural Networks. You&#8217;ve probably heard of them. They are at the forefront of <a href="https://www.megaputer.com/an-introduction-to-machine-learning/" target="_blank" rel="noopener">the machine learning craze</a> and are the driver of many of its most impressive advancements. The technology that enabled machines to beat the best humans in <em>Go</em> and the popular video game <em>StarCraft</em>? Neural Networks. The backbone of the image and facial recognition algorithms that are igniting a surveillance and privacy panic? Neural Networks.</p>
<p>And yet, Neural Networks aren’t some new idea spawned from the incubator of a giant tech company. They aren’t some stroke of genius from a college student turned dropout who went on to found a revolutionary tech firm. Neural Networks are in fact… old hat. Or they were.</p>
<p>The concept of a Neural Network (something we will get to later) has been around for years. They date back to the 1970s, and simpler versions of them existed even in the 1940s! So, if they have existed for decades, why are they only popular now?</p>
<p>The answer is related to the hardware and data advancements mentioned earlier. Neural Networks crunch a lot of numbers. They also need a lot of data to help them learn. Up until the past decade, this made training anything but the simplest networks highly time-consuming and expensive.</p>
<p>With all the great improvements to hardware over the years, using more advanced Neural Networks became feasible. Aided by hardware demands from the entertainment industry and, more recently, cryptominers, GPUs (graphics processing units) have been developed that can perform specific mathematical operations at lightning speed. Luckily, these same kinds of operations occur when training neural networks. Using technology originally developed for beautiful visuals in film and video games lets us train networks in a fraction of the time it takes a traditional CPU.</p>
<p><strong><img class="aligncenter wp-image-32241" src="https://www.megaputer.com/wp-content/uploads/neural-network-diagram.png" alt="neural-network-diagram" width="571" height="464" /></strong></p>
<h2>What’s in a Name?</h2>
<p>The power of Neural Networks may be evident, but at this point, one may also be wondering what they are in the first place. The name gives some clues. A Neural Network isn’t a singular object that makes decisions by itself. It is, as implied, a network of smaller objects all connected. A network of what, then? We call them Neurons. Neurons as in the cells inside our brains? Yes! Well, no. But sort of!</p>
<p>Neurons in Neural Networks can be thought of as being like the neurons in brains. They are tiny, individual units that are connected to other neurons in a large, structured network. These connections allow tiny pieces of data to flow between them. In our brains, these are electrical pulses. In the Neural Network, we send numbers between Neurons. The Neurons then take all the numbers fed to them by the Neurons they are connected to and process them. The process isn’t complicated—in fact, it is painfully trivial. After all, it is just a tiny unit—a single cell in our brain. But it then sends the processed information out to other Neurons it is connected to. Another number. Another electrical pulse. And so on. Tiny Neurons are fed tiny pieces of information, perform tiny pieces of computation on that snippet, and feed it forward to other Neurons, which do the same over and over until we eventually reach a final set of Neurons—the output of which is our final result.</p>
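<p>That feed-forward flow of numbers can be sketched in a few lines of Python (the weights here are arbitrary made-up values, chosen only to show the mechanics):</p>

```python
import math

# One artificial Neuron: weight the incoming numbers, sum them,
# squash the total with a sigmoid, and pass the result onward.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A tiny network: two hidden Neurons feed one output Neuron.
def tiny_network(x1, x2):
    h1 = neuron([x1, x2], [0.5, -0.4], 0.1)
    h2 = neuron([x1, x2], [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

print(tiny_network(1.0, 0.0))  # a single number between 0 and 1
```

<p>Each call to <code>neuron</code> is the "painfully trivial" computation described above; the power comes from chaining thousands of them together.</p>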
<p>From the collective effort of many small individual units networked together in a perfectly calibrated balance, we can achieve enormous computational power. The Whole is greater than the Sum of its parts.</p>
<p>&nbsp;</p>
<h2>Balancing Act</h2>

		</div>
	</div>
<div class="g-cols wpb_row type_default valign_top vc_inner "><div class="vc_col-sm-8 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<p>If that all sounded magical and farfetched, then don’t worry—it is. How exactly are we supposed to arrange these Neurons in such a perfect balance that their combined minuscule computations lead to a machine recognizing human faces? We can’t. So how do we get this to work? Well, we are talking about Machine Learning after all. And Machine Learning is how we are going to solve this. We aren’t going to calibrate the Neurons to be in balance. They are going to calibrate themselves.</p>

		</div>
	</div>
</div></div></div><div class="vc_col-sm-4 wpb_column vc_column_container has-fill"><div class="vc_column-inner  vc_custom_1559659942631"><div class="wpb_wrapper">
	<div class="wpb_text_column  vc_custom_1584549868207">
		<div class="wpb_wrapper">
			<blockquote><p>
We aren’t going to calibrate the Neurons to be in balance. They are going to calibrate themselves.
</p></blockquote>

		</div>
	</div>
</div></div></div></div><div class="g-cols wpb_row type_default valign_top vc_inner "><div class="vc_col-sm-12 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<p>In order to achieve this, we need training data. We need data that is labeled with the desired output we want from the machine. We can provide this data to the unconfigured Neural Network. It will process the data and likely output something nonsensical and useless. But this is fine. We can use a mathematical function called Error, which is just a measurement of how different our output was from our desired targets. Then, using Calculus, we can discover how much of that Error is caused by the calibration of each one of our Neurons! Using this information, we can then tune the Neurons slightly and repeat the process&#8212;feed data into the Network, observe the output, calculate the Error, and use Calculus to know how to tune the Neurons to make them more accurate. This continues until we have converged to a calibrated Network. This process of using Error functions, Calculus, and tuning is what Machine Learning is.</p>
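<p>In miniature, that tune-and-repeat loop looks like this (a deliberately tiny sketch: one Neuron with one weight, learning the made-up rule y = 2x from three labeled examples):</p>

```python
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # labeled training data
w = 0.0                                       # an uncalibrated Neuron

for step in range(100):
    # Error = mean squared difference between output (w*x) and target y.
    # Calculus gives its derivative with respect to w: 2*(w*x - y)*x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad                          # tune the Neuron slightly

print(round(w, 3))  # converges to 2.0
```

<p>Real networks repeat exactly this step for millions of weights at once, which is where the resource costs discussed next come from.</p>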
<p>&nbsp;</p>
<h2>Power at a Price</h2>
<p>Although it is conceptually simple (at least in this explanation that ignores the details), the process is very resource intensive. At the moment, I am fine-tuning a Neural Network on my own desktop to recognize Western art styles. Although the dataset is only a few GB, it takes almost an hour to run a single iteration of the training. It will take nearly two days to complete the 50-iteration learning schedule I planned. And even then, I may need to schedule another run if the Network still needs to learn more! And I&#8217;m not running this on some dusty machine I dragged out of the aughts. This is an almost brand-new desktop with a 3.7 GHz AMD Ryzen 8-core processor, 32 GB of RAM, and virtually no other load on the machine.</p>
<p>Neural Networks are expensive to train. If you want to increase performance, you could pay for an expensive GPU, but that might set you back nearly a thousand dollars. Companies and research institutions may have the funds to throw at this problem, but individuals, small companies, and small research groups may not. Luckily, they don’t need to anymore.</p>
<p>Cloud services have opened access to remote processing to provide other computing options to those desiring to train a Neural Network. Don’t have an expensive rig? No worries, just rent one remotely from Google at a fraction of the price.</p>
<p>&nbsp;</p>
<h2>Neurally Networked World</h2>
<p>Neural Networks are here and they aren’t leaving. New advancements and computing architectures are constantly being published. <a href="https://www.megaputer.com/convolutional-neural-networks-polyanalyst/" target="_blank" rel="noopener">Convolutional Networks</a> are good at processing images and Recurrent Networks can handle variable sized data, streaming data, or sequential data. Neural Networks are powerful, indeed—far more so than other solutions we have. But they don’t mimic what is really occurring in our brains. There are some deep flaws even in our most advanced networks. For instance, it is extremely easy to confuse a Neural Network. A Network designed to recognize stop signs can be fooled by a few well-placed stickers on a sign. There are deep safety concerns if these are going to be used in self-driving cars for example.</p>
<p><a href="https://www.megaputer.com/an-introduction-to-machine-learning/" target="_blank" rel="noopener">Neural Networks are deeply dependent on the data used to train them</a> (just as we discussed in a previous article). They are also dependent on how we configure their output. Most networks are designed to give a decision. For object detection, the network must return what it thinks the input image is. But what happens when we feed the network “nothing,” such as a completely blank image or fuzzy static noise? The network is forced to return something so it “sees,” say, a dog in the empty space. This is nonsense. Why would it choose one object over another in these cases where a human would just refuse to rigidly define nonsense? As another example, if we slightly alter an image by inserting some imperceptibly small random noise, we can completely trick a Neural Network. Where it before correctly thought that the image was a butterfly, it now thinks the image is a truck, while a human sees no difference in the images. This is bad, and it reflects deep issues with Neural Networks as we construct them today.</p>
<p>Neural Networks are occupying a liminal space. They are simultaneously scarily powerful and laughably simple and ignorant. They can best the human masters and yet be duped by the smallest of changes. We won’t be seeing Neural Networks achieve human-like sentience any time soon, and they aren’t ready for deployment in many other types of systems. But they are already being used in ways that should cause alarm.</p>
<p>China is already using facial recognition technology to tag members of the Uighur ethnic minority group. Accurate voice recognition means individuals could potentially be tracked even when they are not near a camera, turning the phones in our pockets into monitoring devices. &#8220;Deep Fakes&#8221; are a growing class of fabricated videos that map one person&#8217;s face and voice onto another&#8217;s, and they could be used for blackmail or disinformation. While <em>Terminator</em> remains science-fantasy, armed military drones using neural networks for stabilization, navigation, targeting, and tactics would revolutionize armed conflicts in ways impossible to predict.</p>
<p>Though neural networks are being used to oppress in some places, they are also being used to save lives in others. Medical facilities increasingly deploy network systems to detect ailments such as cancer or infectious diseases. Laboratories use similar networks to model complex biomolecules and develop treatments. Neural networks are even being used in traffic light control systems to increase vehicle flow and reduce accidents.</p>
<p>Where will Neural Networks take us next? It&#8217;s hard to say. But it seems evident that the world is caught and will remain in a web of Neurons.</p>

		</div>
	</div>
</div></div></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://www.megaputer.com/2019-news-neural-networks/">What&#8217;s the News with Neural Networks?</a> appeared first on <a rel="nofollow" href="https://www.megaputer.com">Megaputer Intelligence</a>.</p>
]]></content:encoded>
			</item>
		<item>
		<title>Pulse neural networks and microchips</title>
		<link>https://www.megaputer.com/pulse-neural-networks-and-microchips/</link>
		<pubDate>Tue, 20 Jun 2017 14:54:32 +0000</pubDate>
		<dc:creator><![CDATA[Michael Kiselev]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[Neural Networks]]></category>

		<guid isPermaLink="false">https://www.megaputer.com/?p=24671</guid>
		<description><![CDATA[<p>A video of Megaputer's own Dr. Michael Kiselev talking about pulse neural networks and microchips. This video is only available in Russian.</p>
<p>The post <a rel="nofollow" href="https://www.megaputer.com/pulse-neural-networks-and-microchips/">Pulse neural networks and microchips</a> appeared first on <a rel="nofollow" href="https://www.megaputer.com">Megaputer Intelligence</a>.</p>
]]></description>
				<content:encoded><![CDATA[<section class="l-section wpb_row height_small"><div class="l-section-h i-cf"><div class="g-cols vc_row type_default valign_top"><div class="vc_col-sm-12 wpb_column vc_column_container"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column ">
		<div class="wpb_wrapper">
			<p>The following is a video of Megaputer&#8217;s own Dr. Michael Kiselev talking about pulse neural networks and microchips. This video is only available in Russian.</p>

		</div>
	</div>
<div class="w-video ratio_16x9"><div class="w-video-h"><iframe src="//www.youtube.com/embed/xrueHGINOig?rel=0&showinfo=0" allowfullscreen="1"></iframe></div></div></div></div></div></div></div></section>
<p>The post <a rel="nofollow" href="https://www.megaputer.com/pulse-neural-networks-and-microchips/">Pulse neural networks and microchips</a> appeared first on <a rel="nofollow" href="https://www.megaputer.com">Megaputer Intelligence</a>.</p>
]]></content:encoded>
			</item>
	</channel>
</rss>
