<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Aadit Patel]]></title><description><![CDATA[Programmer, Data Science, Ads & Finance. Based out of San Francisco, CA]]></description><link>http://aaditpatel.com/</link><image><url>http://aaditpatel.com/favicon.png</url><title>Aadit Patel</title><link>http://aaditpatel.com/</link></image><generator>Ghost 1.21</generator><lastBuildDate>Sun, 15 Feb 2026 17:48:07 GMT</lastBuildDate><atom:link href="http://aaditpatel.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Using Artificial Intelligence to Induce Colony Formation]]></title><description><![CDATA[Using traditional and selfish q-learning reward/punishment techniques in individual artificial agents, we are able to induce colony formation on aggregate. ]]></description><link>http://aaditpatel.com/artificial-intelligence-colony-formation/</link><guid isPermaLink="false">5a99a0387a90e3352961d3f1</guid><category><![CDATA[AI]]></category><category><![CDATA[q-learning]]></category><category><![CDATA[python]]></category><category><![CDATA[matplotlib]]></category><dc:creator><![CDATA[Aadit Patel]]></dc:creator><pubDate>Sun, 05 Jan 2014 22:45:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h4 id="projectquicklinks">Project QuickLinks</h4>
<ul>
<li><a href="https://www.dropbox.com/s/ygybxyba2qtenk0/Colony%20Forming%20Animats%20FINAL%20REPORT.pdf?dl=0">Project Report</a></li>
<li><a href="https://github.com/aadit/colony-forming-animats">GitHub Project Source Code</a></li>
</ul>
<h3 id="intro">Intro</h3>
<p>As part of my M.S. in Computer Science at UCLA, I took a class called Animats, in which we studied software agents capable of learning, adapting to, and interacting with their respective software environments. The central theme of the course was the emergence of what appears to be a higher-order intelligence from what is actually an underlying, simplistic level of intelligence.</p>
<p>As part of this course, we were tasked with proposing and implementing an artificial agent that could demonstrate this concept. This blog post gives a high-level overview of my project, approach, and results.</p>
<h3 id="theproject">The Project</h3>
<p>For this project, I wanted to see if I could induce colony-like behavior from self-interested individual artificial agents, without the use of communication between agents.</p>
<p>What exactly does this jargon mean? In more concrete terms, I want ants (in software) that have no way of communicating with each other to <em>appear</em> like they are communicating by forming colonies. A colony, in the context of this project, is defined as a centralized location to which some ants bring resources and where other ants &quot;hang out to eat&quot;.</p>
<p>To simplify, our project made some underlying <strong>assumptions</strong>:</p>
<ul>
<li>
<p>The environment (2D plane) where the ants reside consists of 4 food types (e.g. sugars, proteins, vitamins, and water) that an ant will need to consume to survive. There are food generators and food objects. Food generators generate food of a given type at set intervals.</p>
</li>
<li>
<p>Ants have gradient sensors that tell them in which direction (North, South, East, or West) from their current position each food type is located. However, ants will only &quot;target&quot; the food type which is currently the most depleted. For example, if the ant is low on water, it will ignore seeking out sugars, proteins and vitamins and only head towards a water source in the environment.</p>
</li>
<li>
<p>Ants have the ability to pick up food, move around with food in their jaws, and drop food back into the environment. They can also bite into food to eat. When the ant eats food, it replenishes the ant's energy (positive reward), but if an ant were to drop one piece of food type onto another piece of food type (e.g. water and sugar) and eat those two foods together, it would receive bonus energy which exceeds the sum of the energy that would have been gained by eating each food type individually. Therefore, there is great incentive for the ant to collect all the food in one location before biting down to eat it.</p>
</li>
<li>
<p>Moving around the environment and picking up and dropping food utilizes energy (negative reward or penalizer).</p>
</li>
<li>
<p>Each ant in the environment is independent of one another (i.e. it has its own brain, its own self interests and is not part of some sort of colony). Each has its own internal states and actions (more on this later).</p>
</li>
</ul>
<p>The goal is to see if we can construct a system in which the ants learn that, in order to maximize their individual energy, it is most beneficial to have some ants pick up the 4 food types from the environment and bring them to one central location, where they can consistently consume multiple food types at once. As a byproduct, this also increases the net energy stored in all the ants collectively, since the majority of ants no longer have to spend energy scavenging the environment for food.</p>
<p>Also, remember, we want to do all of this without the ants explicitly communicating with each other. The underlying intelligence should be rather minimal.</p>
<p>So how do we accomplish this?</p>
<h3 id="theapproachqlearning">The Approach: Q-Learning</h3>
<p>We modeled each individual ant as a <a href="http://en.wikipedia.org/wiki/Q-learning">Q-Learning</a> agent. A Q-Learner learns by exploring the possible actions it can take at any given state. Once a sequence of actions leads to a reward, the corresponding state-action pairs' Q-values are updated. Over time, the agent learns the optimal policy (sequence of actions) that maximizes its reward. For more information, check out this video on <a href="https://www.youtube.com/watch?v=w33Lplx49_A">reinforcement learning</a>.</p>
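<p>The core of this approach is the Q-value update rule applied after every action. The following is a minimal sketch in Python, not the project's actual code; the learning rate, discount factor, and exploration rate are illustrative assumptions:</p>
<pre><code class="language-python">import random
from collections import defaultdict

ALPHA = 0.1    # learning rate (illustrative value)
GAMMA = 0.9    # discount factor (illustrative value)
EPSILON = 0.1  # exploration rate (illustrative value)

# Q-table: maps (state, action) -> estimated long-term reward
Q = defaultdict(float)

def choose_action(state, actions):
    """Epsilon-greedy: mostly exploit the best-known action, occasionally explore."""
    if random.random() &lt; EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """Standard off-policy Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    """
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
</code></pre>
<p>In the simulation loop, each ant would call <code>choose_action</code>, execute the action in the environment, observe the energy gained or spent as the reward, and then call <code>update</code>.</p>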
<p>For our ants, we have a finite amount of states and actions. At any given time, the internal <strong>state</strong> of the ant is represented by a bitmap representing the following:</p>
<p><strong>1. Target Food</strong> - the food corresponding to the food type that the ant is lowest on. An ant cannot target a food type if it is currently holding a food of that type (2 bits to represent the 4 food types).</p>
<p><strong>2. Holding Food</strong> - whether or not an ant is holding a food item of a certain type (4 bits, one for each food type).</p>
<p><strong>3. On Food</strong> - whether or not an ant is currently on top of a food of a certain type in the environment (4 bits, one for each food type).</p>
<p><strong>4. Gradient</strong>  - for each food type, the direction (North, South, East, West) that leads to the closest food object in the environment (8 bits, 2 for each food type).</p>
<p>In addition to states, we have actions. An ant is able to perform any of the following <strong>actions:</strong></p>
<ol>
<li>Move North</li>
<li>Move South</li>
<li>Move East</li>
<li>Move West</li>
<li>Eat Food</li>
<li>Pickup Food (only if there is no food currently picked up)</li>
<li>Drop Food (only if there is food currently picked up)</li>
</ol>
<p>Therefore, the state space is 2^18 unique states while the action space is 7 unique actions. Even though this space is relatively large to explore, Q-Learning is a good candidate for solving this problem since it can learn the optimal policy while following a different, exploratory policy (this is called off-policy learning).</p>
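<p>The 18-bit state described above can be packed into a single integer and used to index the Q-table. This is a hypothetical sketch of such an encoding; the field order and bit layout are my assumptions, not necessarily the project's:</p>
<pre><code class="language-python">def encode_state(target_food, holding, on_food, gradients):
    """Pack an ant's state into an 18-bit integer.

    target_food: 0-3, which of the 4 food types is most depleted -> 2 bits
    holding:     4-tuple of 0/1 flags, one per food type         -> 4 bits
    on_food:     4-tuple of 0/1 flags, one per food type         -> 4 bits
    gradients:   4-tuple of directions 0-3 (N/S/E/W), per type   -> 8 bits
    """
    state = target_food                # bits 0-1
    for i, h in enumerate(holding):
        state |= h &lt;&lt; (2 + i)          # bits 2-5
    for i, f in enumerate(on_food):
        state |= f &lt;&lt; (6 + i)          # bits 6-9
    for i, g in enumerate(gradients):
        state |= g &lt;&lt; (10 + 2 * i)     # bits 10-17
    return state  # value in range(2**18)
</code></pre>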
<h3 id="diditwork">Did It Work?</h3>
<p>For the project, we programmed six different simulation scenarios to test our hypothesis, but I'll only talk about the two most interesting results.</p>
<h5 id="scenarioi4distributedfoodsources">Scenario I: 4 Distributed Food Sources</h5>
<iframe width="560" height="315" src="//www.youtube.com/embed/z23c7ERFXh8" frameborder="0" allowfullscreen></iframe>
<p>For the first scenario, we placed the 4 food types in opposite sides of the 2D environment and let the ants run through their simulation. You can see a visualization of this in the video above. Over time, a few ants pick up food from the respective food sources and bring them to the center of the environment, where we then see a colony of ants start to form (multiple ants and food sources occupying the same small region of space). What's interesting to note here is that not every ant needed to learn the optimal policy of picking up food and bringing it to a centralized location. As long as a few ants learned this altruistic policy, the other ants could &quot;freeload&quot; off of the colony's centralized resources without needing to learn the policy themselves.</p>
<h4 id="scenarioii4distributedfoodsourcesunlimitedfood">Scenario II: 4 Distributed Food Sources -- Unlimited Food</h4>
<iframe width="560" height="315" src="//www.youtube.com/embed/A5l0J79_6KY" frameborder="0" allowfullscreen></iframe>
<p>In this second scenario, we place only 1 instance of each food type in opposite sides of the environment. However, these food objects are infinitely large (i.e. they cannot be totally consumed by the ants) but can be moved around in their entirety. From the animation, we see that the ants initially start by visiting each food source on the map individually and eating. Over time, a few ants once again learn to pick up and drop the food in a centralized location to minimize the energy they expend traversing the environment. What's unique in this scenario is that the colony itself moves around the environment. This may be due to a few ants who are still exploring their state-action space; however, it is interesting to note that as soon as one food type is moved away from the colony by a &quot;rogue&quot; ant, other &quot;better trained&quot; ants quickly move the food type back to the colony area or move the other three food types closer to the displaced food type in order to maintain the colony.</p>
<p>Overall, it seems from these initial experiments that the simple Q-Learning approach to colony formation works with self-interested agents, given the right conditions. Even though none of the ants in the simulations were able to communicate with each other, the ants formed colonies near the center of the distributed food locations.</p>
<p>If you're interested in a more in-depth look at this project, feel free to browse the project <a href="https://github.com/aadit/colony-forming-animats">source code</a> and/or download the <a href="https://www.dropbox.com/s/ygybxyba2qtenk0/Colony%20Forming%20Animats%20FINAL%20REPORT.pdf?dl=0">project report</a>.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Creating My Own Private Social Network - Proof of Concept]]></title><description><![CDATA[Working out the details in creating a social network using PHP, Laravel, and a sqlite database. Features include news feed, @ mentions, notifications and comments.]]></description><link>http://aaditpatel.com/creating-my-own-private-social-network-proof-of-concept/</link><guid isPermaLink="false">5a99a0387a90e3352961d3ef</guid><category><![CDATA[social networks]]></category><category><![CDATA[php]]></category><category><![CDATA[laravel]]></category><category><![CDATA[sqlite3]]></category><dc:creator><![CDATA[Aadit Patel]]></dc:creator><pubDate>Tue, 19 Feb 2013 02:32:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><img src="http://aaditpatel.com/content/images/2014/Mar/virafam1.png" alt="The Vira Family"></p>
<p>If you were to venture over to virafamily.com, you'd see a Home page with an image slider of my family, a bunch of nonsensical placeholder text, a couple of unfurnished web pages (the About and Family Tree pages), and a Dashboard page, which you must be logged in to access.</p>
<p>Back in 2012, my New Year's Resolution (or more like New Year's goal) was to create a website for my extended family. The idea was relatively simple: create an online space where I could put up photos from family gatherings, design out a family tree, and maybe even come up with brief bios for some of our deceased family members. I bought the domain, put up the content, and watched as no one in my family was remotely interested in visiting the website.</p>
<p>During this past Christmas Holiday, I decided to revamp the website in order to make it more interactive for my family. It was an idea I had been toying with for a little while now -- a completely private social network. There were a couple of motivating factors behind this idea:</p>
<ul>
<li>
<p><strong>Facebook can't filter content (yet).</strong> For example, Facebook doesn't allow me to filter my news feed (i.e. I can't filter my news feed to see content from just my friends vs. just my family). I realize Google+ solves this by introducing circles, but I feel like it's too advanced for the majority of my family to understand.</p>
</li>
<li>
<p><strong>Privacy.</strong> Sharing intimate details about your family or personal life on your own server and database is simply more private than using third-party social networking sites.</p>
</li>
</ul>
<h2 id="creatingthenetwork">Creating the Network</h2>
<p>Creating a private network, as opposed to a public one, has perks that keep the application logic much more straightforward; here are some underlying assumptions about the application:</p>
<ul>
<li>
<p>First and foremost, the network is closed to the public, so there should be no sign up process. The only way to access the application is by being invited by another user (e.g. another family member).</p>
</li>
<li>
<p>Since this is a private network, everyone, in theory, should know everyone else and therefore be connected with everyone else (i.e. all nodes are connected). There is no need for complex privacy settings, etc. There are no individualized &quot;walls&quot;. All users post to a single feed, the Family Feed. (I've implemented Twitter-like @ tags to target posts toward a specific user).</p>
</li>
</ul>
<p>I ended up creating the application in about 4 days during the holidays. I used the Laravel PHP Framework, which comes packaged with a great object-relational mapper, and utilized a simple SQLite database to store posts/comments/notifications/profiles. In January, I also created a RESTful backend API for the app's resources, which allowed me to create an Android version of the application, as well. The application contains a central news feed, Twitter-like @ tags, a notification engine, and user profiles.</p>
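<p>The Twitter-like @ tags boil down to scanning post text for known usernames. The actual app is written in PHP with Laravel, but the idea can be sketched in a few lines; the regex and username format here are my assumptions, not the app's actual rules:</p>
<pre><code class="language-python">import re

# Assumed username format: letters, digits, and underscores, like Twitter handles
MENTION_RE = re.compile(r'@(\w+)')

def extract_mentions(post_text, known_users):
    """Return the set of known usernames @-mentioned in a post.

    Unknown handles are ignored so a stray '@' can't trigger a notification.
    """
    return {name for name in MENTION_RE.findall(post_text) if name in known_users}
</code></pre>
<p>Each extracted mention would then get a row in a notifications table, which is what drives the notification engine described below.</p>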
<h2 id="features">Features</h2>
<p>Here are some screencaps highlighting some of the ViraFamily features:</p>
<h4 id="virafamilydashboarddisplayingthefamilynewsfeedwhichisthecentralcomponentoftheapplication"><em>Vira Family Dashboard displaying the Family News Feed, which is the central component of the application.</em></h4>
<p><img src="http://aaditpatel.com/content/images/2014/Mar/virafam1.png" alt="The Vira Family"></p>
<h4 id="userscanaddgoalsontotheirprofileswhicharevisiblebytherestofthefamily"><em>Users can add goals onto their profiles, which are visible by the rest of the family.</em></h4>
<p><img src="http://aaditpatel.com/content/images/2014/Mar/virafam2.png" alt="The Vira Family"></p>
<h4 id="twitterliketagsallowuserstoaddressspecificusersintheirposts"><em>Twitter-like @tags allow users to address specific users in their posts.</em></h4>
<p><img src="http://aaditpatel.com/content/images/2014/Mar/virafam3.png" alt="The Vira Family"></p>
<h4 id="thenotificationenginealertsuserswhentheyarementionedalongwithotheruseractivity"><em>The notification engine alerts users when they are mentioned along with other user activity.</em></h4>
<p><img src="http://aaditpatel.com/content/images/2014/Mar/virafam4.png" alt="The Vira Family"></p>
<h4 id="thestandaloneandroidapplicationconnectstothedatabasethrougharestfulapi"><em>The standalone Android application connects to the database through a RESTful API.</em></h4>
<p><img src="http://aaditpatel.com/content/images/2014/Mar/virafam5.png" alt="The Vira Family"></p>
<h2 id="conclusion">Conclusion</h2>
<p>So far the application has been up and running for about 2.5 months at the time of writing. My family uses the application fairly regularly, especially as more of them are downloading the Android app. I still need to develop the iPhone version of the app for the few family members who own iPhones, but all in all, the site is much more successful than my previous attempt last year.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Chrome Extension Download Stats]]></title><description><![CDATA[With only a little less than a month of being on the Chrome store, my extension has received over 2,000 users and over 2,500  impressions per day. ]]></description><link>http://aaditpatel.com/chrome-extension-download-stats/</link><guid isPermaLink="false">5a99a0387a90e3352961d3ee</guid><category><![CDATA[video]]></category><category><![CDATA[youtube]]></category><category><![CDATA[chrome app]]></category><category><![CDATA[youtube sort]]></category><dc:creator><![CDATA[Aadit Patel]]></dc:creator><pubDate>Sat, 12 Jan 2013 18:30:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>After less than a month of being on the Chrome Web App store, my YouTube Chrome extension has gotten some relatively decent exposure for being such a small app that was developed in less than 3 hours. As of date, it currently has ~2k users and is getting about 2k-2.5k impressions per day. Interestingly enough, impressions and installations are not necessarily correlated, as seen in the Google's Developers Dashboard image below. YouTube also recently added back in the functionality to sort videos, so it looks like this extension may be obsolete, at least until the next YouTube update.</p>
<p><img src="http://aaditpatel.com/content/images/2014/Mar/chrome_app_stats.png" alt="Chrome Stats"></p>
</div>]]></content:encoded></item><item><title><![CDATA[Chrome Extension for YouTube Video Sorting]]></title><description><![CDATA[Chrome extension for adding the YouTube sort features back into Google Chrome. ]]></description><link>http://aaditpatel.com/chrome-extension-for-youtube-video-sorting/</link><guid isPermaLink="false">5a99a0387a90e3352961d3ed</guid><dc:creator><![CDATA[Aadit Patel]]></dc:creator><pubDate>Mon, 03 Dec 2012 01:30:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Usually when I hear a new artist I like on Pandora, I head over to YouTube to find some of their other hits. Until recently, it was rather easy to sort the search results by View Count or Average Rating to find the most popular songs, but it looks like with YouTube's latest release all video sorting has been disabled.</p>
<p>So naturally, I made a Google Chrome Extension which adds those capabilities right back. Installation is pretty straightforward, and the extension adds sort criteria whenever you visit any youtube.com/results page.</p>
<p>You can grab the extension from the link below!</p>
<p><a href="https://chrome.google.com/webstore/detail/video-sorter-for-youtube/feamkbjehbbidedhlnlcibjoejmbjlgn">https://chrome.google.com/webstore/detail/video-sorter-for-youtube/feamkbjehbbidedhlnlcibjoejmbjlgn</a></p>
</div>]]></content:encoded></item></channel></rss>