<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Amruth</title>
	<atom:link href="https://amruth.in/feed/" rel="self" type="application/rss+xml" />
	<link>https://amruth.in</link>
	<description>Writer . Researcher . Entrepreneur</description>
	<lastBuildDate>Fri, 03 Jan 2025 07:02:51 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>

<image>
	<url>https://amruth.in/wp-content/uploads/2025/01/cropped-Amruth-Profilepic-32x32.webp</url>
	<title>Amruth</title>
	<link>https://amruth.in</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Quantum Bayesianism</title>
		<link>https://amruth.in/science/quantum-bayesianism/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Wed, 28 Aug 2024 11:51:51 +0000</pubDate>
				<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=2040</guid>

					<description><![CDATA[A radical new interpretation of quantum mechanics.]]></description>
										<content:encoded><![CDATA[<h3 class="header-anchor-post">The Quantum Enigma</h3>
<p>“I think I can safely say that nobody <em>really</em> understands quantum mechanics,” said Richard Feynman in 1964, one year before he received the Nobel Prize for his work in quantum electrodynamics.</p>
<p><em>What was he referring to and do we still not understand it?</em></p>
<p>Let’s briefly explore the key ideas of quantum mechanics to set the context and the language for diving deeper.</p>
<p>In quantum mechanics, we need two kinds of information to describe the fundamental particles that make up our universe:</p>
<ol>
<li><strong>Fundamental properties:</strong> Intrinsic characteristics that do not change with time &#8211; mass, charge, etc.</li>
<li><strong>Quantum states:</strong> Characteristics of the particle that change with time &#8211; position, momentum, etc.</li>
</ol>
<p>Whenever we <strong>interact</strong> with a particle in the <strong>real world</strong>, we <em>always</em> find it in a definite and singular quantum state. This is important to state since there is a lot of confusion about this on the internet.</p>
<p>However, if we want to <strong>predict</strong> which of the many possible states it will be in, we find it impossible to do so with 100% accuracy. The equations of quantum mechanics can only tell us the <strong>probability</strong> of finding it in each possible state.</p>
<p>This isn’t strange. There are examples like this even in classical physics. If we want to predict a coin toss before it lands, the best we can do is calculate the probability of getting heads and tails &#8211; 50% each.</p>
<p>However, in classical mechanics, this uncertainty comes from not knowing all the information required to predict a coin toss. If we know all the information, it is possible to predict the outcome of a coin toss with 100% accuracy.</p>
<p>This is not the case in quantum mechanics.</p>
<p>In quantum mechanics, we use a mathematical quantity called <strong>wave function</strong> to calculate the probability of finding a particle in one state or another. Most of our difficulty in understanding quantum mechanics comes from not knowing how to interpret what this wave function means.</p>
<p><em>But what’s the big difficulty?</em> <em>Why don’t we interpret it the same way as we interpret probability in classical mechanics?</em></p>
<p>Because some experiments, like the famous double-slit experiment, showed us that different possible states of a particle can interact with each other to change how the particle behaves. In the case of a coin toss, it’s like the “possibility of heads” interacting with the “possibility of tails” to give you new probabilities for getting heads or tails when the coin lands.</p>
<p><em>But wait,</em> you might say, <em>it’s just a mathematical possibility, not two separate real coins &#8211; one head, one tail &#8211; how will they interact with each other?</em></p>
<p>That’s the puzzle.</p>
<p>Almost all the strangeness of quantum mechanics comes from this interaction between “mathematical possibilities” that are not even supposed to be real in classical mechanics.</p>
<p><em>Why and how do they interact? Does this mean all mathematical possibilities are real in some sense or some other universe?</em></p>
<p><em>If they are real, why don’t other limitations that apply to real things apply to them?</em> Like the limitation Einstein discovered &#8211; “no information can travel faster than the speed of light between two real particles” &#8211; which seems to be violated when measurements on entangled particles show instantly correlated outcomes even when they are light years apart.</p>
<p><em>If they are neither real nor purely mathematical, are they something in between?</em></p>
<p>I can summarize the answer to all these questions by borrowing Feynman’s words from 1964 &#8211; “I think I can safely say that nobody <em>really</em> understands.”</p>
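<p>The interference at the heart of this puzzle can be sketched numerically. In quantum mechanics, each possibility carries a complex <strong>amplitude</strong>, and the probability comes from squaring the <em>sum</em> of amplitudes (the Born rule), not from summing the squared amplitudes. A minimal sketch, with illustrative values not tied to any real experiment:</p>

```python
import numpy as np

# Two possible "paths" (states) with complex amplitudes a and b.
a = 1 / np.sqrt(2)
b = np.exp(1j * np.pi) / np.sqrt(2)  # same magnitude, opposite phase

# Classical intuition: the probabilities of the two possibilities just add.
p_classical = abs(a) ** 2 + abs(b) ** 2   # 0.5 + 0.5 = 1.0

# Quantum rule (Born rule): amplitudes add FIRST, then we square.
p_quantum = abs(a + b) ** 2               # the two possibilities cancel out

print(p_classical, p_quantum)
```

<p>With equal magnitudes but opposite phases, the two “possibilities” cancel completely &#8211; something classical probabilities can never do.</p>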
<h3 class="header-anchor-post">Enter QBism</h3>
<p>For the longest time, I struggled to find a clear enough articulation of what QBism says. Eventually, as I understood it better, I came up with one:</p>
<p>QBism argues that a wave function represents the <strong>real information</strong> we have, as observers, about various <strong>possible states</strong> of a particle.</p>
<p>To truly understand QBism, we must remind ourselves that no particle (or person) ever finds any other particle in a superposition of multiple possible states in the real world. It only happens in the “mathematical model” we must construct to predict what we may observe in the real world. It’s <em>as if</em> they exist in a superposition of multiple states before we interact with them. <em>As-if.</em></p>
<p>Most particles in the universe do not go around predicting each other. As far as they are concerned, there’s no strangeness or spookiness in our universe. Particles always exist in definite and singular states whenever they observe one another.</p>
<p>The only entities that encounter spookiness are the ones trying to predict future outcomes &#8211; like us. To do this, they must store and manipulate information <em>somewhere</em> &#8211; brains, computers, something else &#8211; making it real.</p>
<p>If quantum mechanics describes this real information about particles and NOT the particles themselves, then all the spookiness disappears!</p>
<p><em>Superposition?</em> Of course, our information about a particle can exist in a superposition of two states. The contradiction disappears.</p>
<p><em>Faster-than-light communication between entangled particles?</em> No matter how far two particles travel, our information about them remains within us. So, there’s no need for faster-than-light communication to update our information about the two entangled particles. Contradiction disappears.</p>
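<p>Mechanically, this “update of our information” is ordinary conditioning. Here is a toy, purely classical stand-in (perfectly correlated outcomes, not a real quantum calculation) showing that certainty about the distant particle arrives by revising our own bookkeeping, not by any signal:</p>

```python
from fractions import Fraction

# Our information about two perfectly correlated outcomes, stored as a
# joint probability table (a hypothetical toy example, not real physics).
joint = {("up", "up"): Fraction(1, 2), ("down", "down"): Fraction(1, 2)}

# Before measuring either particle, particle B is 50/50 in our description.
p_b_up = sum(p for (a, b), p in joint.items() if b == "up")

# We measure particle A and find "up". Updating OUR information (Bayesian
# conditioning) happens locally, in our own bookkeeping -- no signal is sent.
posterior = {ab: p for ab, p in joint.items() if ab[0] == "up"}
total = sum(posterior.values())
p_b_up_given_a_up = sum(p for (a, b), p in posterior.items() if b == "up") / total

print(p_b_up, p_b_up_given_a_up)
```

<p>The moment we learn A’s outcome, our probability for B jumps from 1/2 to 1 &#8211; instantly, because the thing being updated lives in our description, not out there between the particles.</p>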
<p>All we have to give up in exchange is our belief that physics describes objective reality, not just our information about it.</p>
<h2 class="header-anchor-post">Not Again!</h2>
<p><em>“Not again! &#x1f644;”</em> This is probably what you’re thinking if you’ve read my previous article about <a href="https://rogue42.substack.com/p/predictive-processing" rel="nofollow ugc noopener"><strong>Predictive Processing</strong> &#8211; an emerging view of our brain as a prediction machine that perceives its own predictions as reality</a>.</p>
<p>Well, what can I say? I am obsessed with ideas that question the very nature of reality we perceive (neuroscience) or live in (physics).</p>
<p>If we look at history, every time science seemed stuck and unable to progress, it was eventually set free by ideas that questioned and altered our understanding of the very nature of reality. <em>Why should it be different this time around?</em></p>
<p>As always, I hope this post sparks enough excitement in you to explore this beautiful idea on your own. And if you ever need another curious mind to give you company on this journey &#8211; hit me up! &#x2728;</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Meet Avi</title>
		<link>https://amruth.in/art/meet-avi/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Fri, 16 Aug 2024 13:42:57 +0000</pubDate>
				<category><![CDATA[Art]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=2119</guid>

					<description><![CDATA[Meet Avi - a character from my new fantasy fiction novel.]]></description>
										<content:encoded><![CDATA[<p>Meet Avi &#8211; a character from my new fantasy fiction novel.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Predictive Processing</title>
		<link>https://amruth.in/science/predictive-processing/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Thu, 15 Aug 2024 19:10:39 +0000</pubDate>
				<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=2025</guid>

					<description><![CDATA[The emerging view of our brain as a prediction machine.]]></description>
										<content:encoded><![CDATA[<h2 data-pm-slice="1 1 []">The Idea</h2>
<p>What happens in our brains when we interact with the world around us? Let’s say when we see an apple in front of our eyes.</p>
<p>In the traditional view, light from the apple produces various sensations in our eyes. Our eyes then send these sensations to different layers of neurons within our brain. Each layer identifies different features of these sensations &#8211; colour, shape, movement, etc. Finally, all these individual features are integrated into one singular experience of “seeing an apple”. This is called <strong>perception.</strong></p>
<p><strong>Predictive Processing (PP)</strong> flips this model. It reimagines our brain as a prediction machine, always predicting its own future. But its future is influenced by various sensations produced in our sense organs when we interact with reality. So, to predict its own future, it must also predict how the outside world will interact with it.</p>
<p>Every time it generates these predictions, it sends them to all the relevant neurons connected to our sense organs. And every time we interact with reality, these neurons check whether the predictions they receive match the sensations produced by reality.</p>
<p>If they match, these neurons stay silent and our brain preserves its beliefs about external reality. These beliefs are called <strong>priors</strong>. If they don’t match, these neurons send an error signal to the neurons above them in the hierarchy. This is called <strong>prediction error</strong>. Our brain uses these prediction errors to update its priors and checks if the new priors result in accurate predictions. This process goes on over and over again until all the prediction errors become small enough to be ignored.</p>
<p>This is the view presented by Predictive Processing.</p>
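<p>The loop described above &#8211; predict, compare, propagate the error, update the prior &#8211; can be sketched in a few lines. This is a deliberately minimal, hypothetical model with one scalar signal and a single “layer”, not a faithful implementation of any published predictive-processing architecture:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

prior = 0.0          # the brain's current belief about the signal
learning_rate = 0.1  # how strongly prediction errors update the prior
true_signal = 5.0    # what reality actually delivers (plus sensor noise)

errors = []
for step in range(200):
    sensation = true_signal + rng.normal(0, 0.5)  # noisy input from reality
    prediction = prior                            # top-down prediction
    prediction_error = sensation - prediction     # bottom-up error signal
    prior += learning_rate * prediction_error     # update the belief
    errors.append(abs(prediction_error))

# Early errors are large; as the prior converges they shrink toward
# the sensor noise level, so less "error traffic" needs to be sent.
print(errors[0], np.mean(errors[-50:]))
```

<p>Early prediction errors are large; as the prior converges, the errors shrink toward the noise floor and the error traffic quiets down.</p>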
<h2>Why is it Revolutionary?</h2>
<p>If you compare the two views I shared, you’ll notice something missing in the second picture: perception.</p>
<p>You’ll see that in this new view, our brain can&#8217;t perceive any sensations produced by our interactions with reality. Because when our predictions are right, these sensations are not even passed on to other neurons in our brains. And when they are incorrect, only the resulting prediction error is passed on.</p>
<p>So, most regions of our brain only receive two kinds of information:</p>
<ol>
<li>Predictions generated by neurons above them in the hierarchy</li>
<li>Prediction errors sent by neurons below them</li>
</ol>
<p>So our perception can only come from one of these.</p>
<p>Predictive Processing argues that all we ever perceive in life &#8211; colours, tones, scents &#8211; are the predictions generated by our own brains and NOT sensations produced by the outside world.</p>
<p>It’s like living in a Matrix-style simulation generated by our own brains. A simulation whose purpose is not to give us an accurate picture of our reality, but to correctly predict what sensory inputs are generated when we interact with it.</p>
<p>This is a revolutionary idea with, as we shall see later, far-reaching consequences. But first, how do we know it’s the right idea?</p>
<h2>The Clues</h2>
<p>Well, we don’t. But several clues are pointing us in this direction. Let’s look at a few interesting ones.</p>
<p><strong>#1 The mystery of our brain’s energy efficiency</strong></p>
<p>You might have heard of AI systems consuming enormous computing power and energy as they are beginning to get smarter. Yet, we all have a brain doing far more complex tasks with just a fraction of the energy.</p>
<p><em>How is our brain pulling this off? What makes it SO energy efficient?</em></p>
<p>In 2021, a team of researchers decided to investigate this question by making a small change to a class of neural networks called Recurrent Neural Networks (RNN).</p>
<p>In our brain, the firing of neurons consumes the most energy. So, to minimize energy consumption, we’d have to minimize the amount of firing. In RNNs (and in all neural networks), the amount of firing is influenced by the strength of the connections between neurons &#8211; also called <strong>weights</strong>. So the researchers forced the RNN to perform its task while simultaneously keeping the weights between its neurons as small as possible.</p>
<p>With just this one extra constraint, the network started organising itself into an architecture where some of its neurons began predicting the signals received by other neurons, which then learnt to fire only if these predictions were wrong. Initially, when the RNN had no data to make accurate predictions, it generated a lot of prediction errors, resulting in a lot of firing. However, as its predictions improved, the firing rates decreased, minimising the energy consumed.</p>
<p>This is a pretty significant clue &#8211; forcing regular neural networks to conserve energy naturally results in a predictive processing architecture.</p>
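<p>In spirit, the researchers’ “smallest possible weights” constraint is an L2 penalty on connection strengths. A toy stand-in using linear regression instead of an RNN (synthetic data, not the 2021 model) shows how such a penalty shrinks the weights while the task still gets solved:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic task: recover a known weight vector from noisy observations.
X = rng.normal(size=(100, 5))
w_true = np.array([3.0, -2.0, 1.0, 0.5, -1.5])
y = X @ w_true + rng.normal(0, 0.1, size=100)

# Unconstrained fit (ordinary least squares).
w_free = np.linalg.solve(X.T @ X, X.T @ y)

# Same fit, but penalizing large weights (L2 / ridge regression).
lam = 10.0
w_small = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# The penalty does its job: the constrained weights are strictly smaller.
print(np.linalg.norm(w_free), np.linalg.norm(w_small))
```

<p>For any positive penalty the ridge solution has a smaller norm than the unconstrained one &#8211; the same kind of pressure that, in the 2021 RNN experiment, nudged the network toward a predictive architecture.</p>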
<p><strong>#2 A brain slice that learnt to play a video game</strong></p>
<p>In a fascinating experiment from 2022, now popular as “Dishbrain”, researchers connected a bunch of neurons grown in a petri dish to a simple video game &#8211; Pong, where you control a paddle to hit a ball back and forth.</p>
<p>One part of the brain tissue was fed signals about the ball’s position and the signals from a different part of the tissue were used to move the paddle. In Pong, every time you miss the ball, it resets from a random location. To simulate this, researchers sent random unpredictable signals to the brain tissue whenever it missed the ball.</p>
<p>At first, the brain tissue was moving the paddle in random directions. After all, it was just a small bunch of neurons without the rest of the brain to tell it what to do.</p>
<p>However, over the next few minutes, the brain tissue slowly learned to coordinate its internal electrical activity to avoid missing the ball &#8211; sustaining longer and longer rallies with time.</p>
<p><em>But how? It’s just a slice of brain tissue in a petri dish!</em></p>
<p>Predictive processing has no issues explaining the results of this experiment. Each layer of neurons is always trying to predict the electrical activity of the neurons in the layer below. Every time the paddle misses the ball, it receives a random unpredictable signal that generates a prediction error. In response, the neurons reorganize their connections to minimize prediction errors in the future. Many rounds of reorganization later, the brain tissue ends up getting connected in a way that minimizes missing the ball.</p>
<p>Now, alternatives to predictive processing can explain the results of the Dishbrain experiment only if we assume that random unpredictable signals somehow count as “punishment” and predictable signals count as “reward”. Yet, they offer no real reason for why this should be the case without invoking predictive processing.</p>
<p><strong>#3 Natural emergence of Self and subjective experience</strong></p>
<p>One of the biggest mysteries of neuroscience is explaining how a bunch of neurons can give rise to the rich subjective experiences of our mind.</p>
<p>Let’s take the colour red. Physics tells us that each colour is related to the wavelength of light that hits our eye. A camera and our eye will both receive the same wavelength.</p>
<p>Yet, we perceive “<em>something more</em>” when we perceive red. After all, a colour-blind person sees the same wavelength but perceives a “redness” that is different from what others perceive. This “something more” is called <strong>qualia</strong>. It’s the redness of red and the sweetness of sugar. Qualia form the building blocks of our subjective inner reality.</p>
<p>While many efforts are underway to understand how our brain produces qualia, predictive processing offers the simplest explanation among its rivals.</p>
<p>In predictive processing, the actual wavelength of red light is immaterial to our perception since we never directly perceive the sensations produced by it. Instead, our perceptions come from our brain&#8217;s predictions. These predictions are inherently subjective since the brain activity that helps me predict the sensation of red light might differ from the brain activity that helps you predict the same sensation. Thus, qualia.</p>
<p>By the same logic, our brains don&#8217;t directly perceive the signals from our own bodies either. Instead, we only perceive the models or simulations our brain creates to predict incoming signals from our bodies. The same goes for our brains. Some regions predict the electrical activity of others. These predictions become the basis for our subjective sense of &#8220;<strong>self</strong>.&#8221;</p>
<p>So much explanatory power emerges from a simple assumption: <em>our neurons don&#8217;t transmit raw signals, but only the prediction errors that arise from those signals.</em></p>
<p>Of course, this simplicity alone doesn&#8217;t make it right, but it does make it an appealing idea to explore, no?</p>
<h2>The Challenges</h2>
<p>As you would’ve guessed, we’d already be hailing predictive processing as the new default of neuroscience if it didn’t have challenges waiting for a fix. Let’s look at them in this section, hoping that they may inspire us to attempt solving them.</p>
<p><strong>&#x26a0; Lack of testable predictions</strong></p>
<p>The biggest challenge and opportunity for predictive processing is to define clear, testable predictions that uniquely arise from predictive processing, distinguishing it from other theories. Without such predictions, critics argue that it is still too broad and general, maybe even unfalsifiable in its current form.</p>
<p><strong>&#x26a0; Anatomical implementation</strong></p>
<p>While we now have computational models to implement predictive processing, we still lack a clear explanation for how it may be implemented within the brain. Most of its explanatory power comes from a “black box algorithm” level of understanding.</p>
<p>In its defence, none of its alternatives are any closer.</p>
<p><strong>&#x26a0; Mechanism for Active inference</strong></p>
<p>Active inference, a key component of predictive processing, suggests that organisms don’t just update their priors to reduce prediction errors but also take actions to fulfil predictions (e.g., moving a finger to confirm a prediction that “my finger will move now”). How this can happen remains unanswered.</p>
<p><strong>&#x26a0; Integration with well-established cognitive principles</strong></p>
<p>Predictive processing is yet to be integrated with other well-established cognitive principles, such as reinforcement learning (learning by trial and error with rewards), Hebbian learning (neurons that fire together, wire together) and embodied cognition (cognition is not just something the brain does, it’s something the whole body does).</p>
<h2>The Possibilities</h2>
<p>Despite its challenges, predictive processing holds the potential to one day become the default theory of how our brains work. Here are the possibilities I’m most excited about &#8211;</p>
<p><strong>✦</strong> <strong>Unified Framework for Cognition</strong></p>
<p>Starting with just one assumption that our brain operates on the principle of minimizing prediction error, it provides a unified account of how various features of our mind emerge &#8211; from perception and action to learning and even higher-level processes like decision-making and consciousness.</p>
<p>If we can develop this into a full-fledged testable theory, it might someday emerge as the “theory of everything” of neuroscience.</p>
<p><strong>✦</strong> <strong>A Fundamental Connection with the Physics of Life</strong></p>
<p>The predictive processing framework aligns with the Free Energy Principle, a mathematically grounded theory that explains how biological systems maintain order and resist entropy. This alignment gives it a solid theoretical foundation and connects it to broader principles of thermodynamics and information theory, giving it a strong headstart in explaining why it evolved in the first place.</p>
<p><strong>✦ A New Way to Understand Neurological Conditions</strong></p>
<p>Predictive processing provides new ways to model and simulate several neurological disorders by framing them as disruptions in the brain’s predictive mechanisms. For example, conditions like schizophrenia, autism, and anxiety disorders can be understood as resulting from imbalances in prediction and error correction, offering new avenues for treatment and intervention.</p>
<p><strong>✦ A Gateway to Artificial Sentience</strong></p>
<p>By providing a computational model for how subjective experiences could emerge as a natural side-effect of minimizing prediction errors generated by signals from objective reality, predictive processing opens the door for creating artificial systems with subjective experiences, a sense of self, and possibly even <strong>consciousness</strong>.</p>
<h2>Closing Thoughts</h2>
<p>Over the years, I have implemented several forms of predictive processing models: black box algorithms, neural networks, models with altered parameters that successfully reproduce several symptoms of autism and schizophrenia, models that failed at reproducing symptoms of ADHD, anxiety and depression…</p>
<p>It has been an exciting journey, peppered with moments of deep frustration when facing its limitations. But I can confidently say that the deeper I’ve dived into this framework, the more I’ve come to bet on its potential.</p>
<p>So, I hope this post sparks enough excitement in you to explore this beautiful idea on your own. And if you ever need another curious mind to give you company on this journey &#8211; hit me up! &#x2728;</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Spidy Says</title>
		<link>https://amruth.in/art/spidy/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Tue, 23 Jul 2024 13:11:41 +0000</pubDate>
				<category><![CDATA[Art]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=2086</guid>

					<description><![CDATA[Don't let power or responsibility imprison the kid within you.]]></description>
										<content:encoded><![CDATA[<p>Don&#8217;t let power or responsibility imprison the kid within you.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Purpose</title>
		<link>https://amruth.in/psych/purpose/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Mon, 22 Jul 2024 08:08:43 +0000</pubDate>
				<category><![CDATA[Psych]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=1988</guid>

					<description><![CDATA[The 3 ingredients our brain needs to generate the feeling of purpose.]]></description>
										<content:encoded><![CDATA[<p data-pm-slice="1 1 []"><strong>The</strong> dictionary meaning of purpose is &#8211; <em>the reason for which something exists</em>. This is different from a goal. In many ways, a goal is the opposite of purpose. You need conscious effort to <em>chase</em> a goal. Whereas, you need conscious effort to <em>resist</em> a purpose.</p>
<p>Yet, in recent times, we have created a false equivalence between purpose and goal. This has made more and more of us chase goals in response to a feeling of lacking purpose.</p>
<p>So, what’s purpose? <em>It’s a feeling, not a goal.</em></p>
<p>Have you ever felt purposeful even if only for a moment? I hope you have coz it’s hard to describe a feeling using words &amp; I may not do a good job at it. So I’ll try describing its defining features instead:</p>
<ul>
<li>It’s <strong>rewarding</strong> &#8211; it makes you miss it when it’s gone.</li>
<li>It’s <strong>transcendental</strong> &#8211; it evokes a sense of being a part of something larger than yourself.</li>
<li>It’s <strong>clarifying</strong> &#8211; it reduces uncertainty (in decision-making).</li>
</ul>
<p>If a feeling satisfies all these 3 conditions, it’s likely to be the feeling I’m calling “purpose”.</p>
<p>Turns out, 3 things need to happen at the same time for this feeling to emerge:</p>
<ol>
<li>You need to be making a <strong>choice that feels important</strong>.</li>
<li>The <strong>choice triggers some emotions</strong> within you &#8211; the more the better.</li>
<li>The choice <strong>reduces the electrical activity in your brain’s self-focused circuits</strong> &#8211; the circuits that come alive when you’re consciously or unconsciously thinking about yourself &#8211; the lower the better.</li>
</ol>
<p>If you’re rarely feeling purposeful these days, it indicates that you are lacking one or more of these 3 ingredients in your day-to-day life.</p>
<p>Let’s look at some examples for clarity &#8211;</p>
<ul>
<li>Chocolate or Vanilla &#8211; it’s not an important choice + doesn’t trigger strong emotions + makes me think about myself = 0/3 ingredients present.</li>
<li>If I come to hit you, do you block me or not &#8211; it feels important + triggers strong emotions + but, it makes me think about myself = 2/3 ingredients present.</li>
<li>If I come to hit someone you love, do you block me or not &#8211; it feels important + triggers strong emotions + does NOT make you think about yourself = 3/3 ingredients present.</li>
</ul>
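<p>To make the rubric concrete, here’s a toy scoring function for the three ingredients (the labels mirror the examples above; it’s an illustration, not a validated psychological instrument):</p>

```python
# Score a choice on the 3 ingredients of purpose described above.
def purpose_score(feels_important, triggers_emotions, self_focused):
    # Each present ingredient adds 1; self-focus must be ABSENT to count.
    return int(feels_important) + int(triggers_emotions) + int(not self_focused)

examples = {
    "Chocolate or vanilla": purpose_score(False, False, True),
    "Block a hit aimed at you": purpose_score(True, True, True),
    "Block a hit aimed at someone you love": purpose_score(True, True, False),
}

for choice, score in examples.items():
    print(f"{choice}: {score}/3")
```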
<p>How often are you getting to make choices that score 3/3 on these ingredients? If your answer is “not enough”, what can you do to increase such opportunities for yourself?</p>
<p><em>Why should I increase such opportunities?</em> you may ask, <em>why chase purpose at all if it’s just another feeling?</em> You’re right! You don’t need to. But since it is an extremely rewarding feeling, it is hard to not miss it after you’ve tasted it. This is what makes us want to chase this feeling even though we don’t need to.</p>
<p><em>But why is it rewarding in the first place?</em> <strong>Evolution</strong>. If one variant of humans accidentally evolved to find this feeling rewarding, they’d have had a huge survival advantage over other variants who didn’t find this feeling rewarding. <em>But why would it give them a survival advantage?</em> Not obvious, is it? Let’s dive a little deeper &#8211;</p>
<p>Humans are social animals who live in groups. We have been for millions of years. When social animals face an external threat, like a predator chasing them for example, they have a choice &#8211; do I prioritize what’s good for me or what’s good for other members of my tribe?</p>
<p>Have you ever watched a lion hunt wildebeests in a documentary (or irl, if you’re cool like that!)? Lions often target the weakest member of a herd and the herd rarely fights the lions to defend that member. Result &#8211; lions rarely hesitate to prey on the wildebeests.</p>
<p>Hyenas, on the other hand, fiercely defend every member of their clan, often by putting themselves at risk. Result &#8211; lions rarely target hyenas. Individually, each hyena is doing something that increases its chances of death. Yet, since most hyenas choose to do the same, it increases their chances of survival as a group.</p>
<p>This is called <strong>Group Fitness</strong>. In social animals, group fitness is more important than individual fitness when it comes to survival of the fittest. By being rewarding, the feeling of purpose directly contributes to an increase in our group fitness even if it may sometimes reduce individual fitness.</p>
<p>There you go! That’s the origin story of the feeling of purpose. What do you think? Is it a feeling you want to chase?</p>
<p>If your answer is yes, you’re naturally wondering &#8211; <em>How can I chase this feeling? What do I do to improve my chances of feeling purpose?</em></p>
<p>It’s simple, but hard &#8211; identify which of the 3 ingredients you are lacking and figure out how you can improve them. That’s all 🙂 Good luck!</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Rebellion</title>
		<link>https://amruth.in/art/rebellion/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Sun, 21 Jul 2024 20:09:40 +0000</pubDate>
				<category><![CDATA[Art]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=2045</guid>

					<description><![CDATA[Being yourself in a world designed to make you someone else.]]></description>
										<content:encoded><![CDATA[<p>Rebellion is the price of being yourself in a world determined to make you someone else.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Empathy</title>
		<link>https://amruth.in/psych/empathy/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Sun, 21 Jul 2024 18:27:07 +0000</pubDate>
				<category><![CDATA[Psych]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=1974</guid>

					<description><![CDATA[Cognitive models for the 3 brain processes behind empathy.]]></description>
										<content:encoded><![CDATA[<div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="md-end-block md-p"><span class="md-plain"><strong>Empathy</strong> is the ability to <span style="text-decoration: underline;">experience</span>, <span style="text-decoration: underline;">understand</span> and <span style="text-decoration: underline;">respond helpfully</span> to other people&#8217;s states of mind.</span></p>
<p>It involves 3 processes:</p>
<ol>
<li><strong>Affective Empathy:</strong> Experiencing what they&#8217;re experiencing. It&#8217;s like catching a cold from someone, but you&#8217;re &#8220;catching&#8221; their emotions and feelings instead.</li>
<li><strong>Cognitive Empathy:</strong> Understanding what they&#8217;re experiencing, how it affects them and how they may respond to it. This involves something called &#8220;Theory of Mind&#8221;.</li>
<li><strong>Empathic Concern:</strong> Urge to help them if your affective and/or cognitive empathy tells you they are struggling. This relates to how selfish or altruistic you are in your interactions with them.</li>
</ol>
<p>We still don&#8217;t know the exact algorithm behind these processes, but here&#8217;s a <strong>high-level cognitive model</strong> that captures the core logic:</p>

		</div>
	</div>
<div class="vc_tta-container" data-vc-action="collapseAll"><div class="vc_general vc_tta vc_tta-accordion vc_tta-color-grey vc_tta-style-classic vc_tta-shape-rounded vc_tta-o-shape-group vc_tta-controls-align-left vc_tta-o-all-clickable"><div class="vc_tta-panels-container"><div class="vc_tta-panels"><div class="vc_tta-panel" id="1645894691692-6b2273d4-e219" data-vc-content=".vc_tta-panel-body"><div class="vc_tta-panel-heading"><h4 class="vc_tta-panel-title vc_tta-controls-icon-position-left"><a href="#1645894691692-6b2273d4-e219" data-vc-accordion data-vc-container=".vc_tta-container"><i class="vc_tta-icon fas fa-theater-masks"></i><span class="vc_tta-title-text">1. Affective Empathy</span><i class="vc_tta-controls-icon vc_tta-controls-icon-plus"></i></a></h4></div><div class="vc_tta-panel-body">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<ol>
<li><strong>Step 1: Perception of Emotional Cues</strong>
<ul>
<li><strong>What:</strong> Noticing relevant cues in the other person&#8217;s facial expressions, body language and behaviour.</li>
<li><strong>Where:</strong> Occipital and temporal lobes, including the fusiform gyrus.</li>
</ul>
</li>
<li><strong>Step 2: Mirror Neuron System Activation</strong>
<ul>
<li><strong>What:</strong> Your mirror neuron system activates many neurons in your brain that normally light up only when YOUR face and body move the way theirs are moving. It&#8217;s as if these neurons are mistaking the other person&#8217;s movements for your own.</li>
<li><strong>Where:</strong> Premotor cortex, inferior parietal lobule.</li>
</ul>
</li>
<li><strong>Step 3: Emotion Recognition System Activation</strong>
<ul>
<li><strong>What:</strong> Identifying the emotions and feelings that normally produce such movements in your face and body.</li>
<li><strong>Where:</strong> Amygdala.</li>
</ul>
</li>
<li><strong>Step 4: Check if it&#8217;s my feelings or theirs</strong>
<ul>
<li><strong>What:</strong> Checking with the system that generates and maintains a model of the other person&#8217;s mind to find out if the identified emotion is theirs or yours.</li>
<li><strong>Where:</strong> Amygdala + Medial prefrontal cortex (mPFC).</li>
</ul>
</li>
<li><strong>Step 5: Experiencing the feeling</strong>
<ul>
<li><strong>What:</strong> The identified emotion is communicated to the system generating your subjective experiences, which results in you experiencing what they may be feeling &#8211; the end result of affective empathy.</li>
<li><strong>Where:</strong> Amygdala + Anterior insula.</li>
</ul>
</li>
</ol>
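<p>A minimal sketch of the five steps above, written as a toy Python pipeline. Every function name, cue and mapping here is an illustrative assumption, not a claim about how neurons actually compute:</p>

```python
# Toy sketch of the five-step affective-empathy pipeline.
# All names, cues and mappings are illustrative assumptions,
# not a model of real neural computation.

def perceive_cues(person):
    # Step 1: notice facial/bodily cues (occipital & temporal lobes)
    return person["cues"]

def mirror(cues):
    # Step 2: mirror-neuron system re-represents their movements as if they were yours
    return {"as_if_mine": cues}

def recognise_emotion(mirrored):
    # Step 3: identify the emotion that normally produces such movements (amygdala)
    cue_to_emotion = {"downturned face": "sadness", "clenched fists": "anger"}
    return cue_to_emotion.get(mirrored["as_if_mine"][0], "unknown")

def attribute(emotion, my_emotion):
    # Step 4: decide whether the identified feeling is theirs or your own (amygdala + mPFC)
    return "other" if emotion != my_emotion else "ambiguous"

def affective_empathy(person, my_emotion="calm"):
    emotion = recognise_emotion(mirror(perceive_cues(person)))
    if attribute(emotion, my_emotion) == "other":
        # Step 5: the emotion enters your own subjective experience (anterior insula)
        return f"I feel their {emotion}"
    return "unclear whose feeling this is"

print(affective_empathy({"cues": ["downturned face"]}))  # → I feel their sadness
```

<p>Each function stands in for one step and brain region of the model; in reality these stages run in parallel and feed back into one another.</p>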

		</div>
	</div>
</div></div><div class="vc_tta-panel" id="1645894691789-5ff14501-6887" data-vc-content=".vc_tta-panel-body"><div class="vc_tta-panel-heading"><h4 class="vc_tta-panel-title vc_tta-controls-icon-position-left"><a href="#1645894691789-5ff14501-6887" data-vc-accordion data-vc-container=".vc_tta-container"><i class="vc_tta-icon fas fa-braille"></i><span class="vc_tta-title-text">2. Cognitive Empathy</span><i class="vc_tta-controls-icon vc_tta-controls-icon-plus"></i></a></h4></div><div class="vc_tta-panel-body">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<ol>
<li><strong>Step 1: Perception of Emotional Cues</strong>
<ul>
<li><strong>What:</strong> Noticing relevant cues in the other person&#8217;s facial expressions, body language and behaviour.</li>
<li><strong>Where:</strong> Occipital and temporal lobes, including the fusiform gyrus.</li>
</ul>
</li>
<li><strong>Step 2: Making Sense of the Cues</strong>
<ul>
<li><strong>What:</strong> Decoding the meaning behind the observed cues to predict what emotion, feeling, intention or thought may be causing them. Ex: &#8220;These expressions mean sadness&#8221;.</li>
<li><strong>Where:</strong> Superior temporal sulcus (STS).</li>
</ul>
</li>
<li><strong>Step 3: Modelling their Mind</strong>
<ul>
<li><strong>What:</strong> The decoded cues are sent to the system that generates and maintains a model of the other person&#8217;s mind. The better this model is, the more accurate your predictions about them will be.</li>
<li><strong>Where:</strong> Medial prefrontal cortex (mPFC).</li>
</ul>
</li>
<li><strong>Step 4: Predict how their mind reacts</strong>
<ul>
<li><strong>What:</strong> Playing around with their mental model to simulate how it might react to the states of mind predicted by the deciphered cues.</li>
<li><strong>Where:</strong> STS, mPFC.</li>
</ul>
</li>
<li><strong>Step 5: Understanding their state of mind</strong>
<ul>
<li><strong>What:</strong> Learning from these simulations to understand how their mind might react to its current state, what can make them feel better or worse, etc. This is what results in Cognitive Empathy.</li>
<li><strong>Where:</strong> mPFC, Anterior Cingulate Cortex (ACC), Temporoparietal Junction (TPJ).</li>
</ul>
</li>
</ol>

		</div>
	</div>
</div></div><div class="vc_tta-panel" id="1645894723038-2a791867-7f3d" data-vc-content=".vc_tta-panel-body"><div class="vc_tta-panel-heading"><h4 class="vc_tta-panel-title vc_tta-controls-icon-position-left"><a href="#1645894723038-2a791867-7f3d" data-vc-accordion data-vc-container=".vc_tta-container"><i class="vc_tta-icon fab fa-gratipay"></i><span class="vc_tta-title-text">3. Empathic Concern</span><i class="vc_tta-controls-icon vc_tta-controls-icon-plus"></i></a></h4></div><div class="vc_tta-panel-body">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<ul>
<li><strong>Step 1: Processing Their Mental State</strong>
<ul>
<li><strong>What:</strong> Processing the experiences and understanding resulting from your affective and cognitive empathy systems.</li>
<li><strong>Where:</strong> Medial prefrontal cortex (mPFC), anterior cingulate cortex (ACC).</li>
</ul>
</li>
<li><strong>Step 2: Self vs Other Bias</strong>
<ul>
<li><strong>What:</strong> This information is combined with signals about the state of your mind &amp; body to decide how much you’ll prioritise managing their condition vs your condition. i.e. Altruism vs Selfishness.</li>
<li><strong>Where:</strong> Medial prefrontal cortex (mPFC), anterior cingulate cortex (ACC).</li>
</ul>
</li>
<li><strong>Step 3: Evaluation of Options</strong>
<ul>
<li><strong>What:</strong> Assess the potential actions and their impacts on the other person and yourself. If you can&#8217;t come up with any action that can help them, you are unlikely to help them even if you want to.</li>
<li><strong>Where:</strong> mPFC, ACC, orbitofrontal cortex (OFC).</li>
</ul>
</li>
<li><strong>Step 4: Decision Making</strong>
<ul>
<li><strong>What:</strong> Make a decision based on the above factors, contextualised to your environment, current state and social contexts.</li>
<li><strong>Where:</strong> mPFC, ACC, OFC.</li>
</ul>
</li>
</ul>

		</div>
	</div>
</div></div></div></div></div></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>If any of these steps are disrupted, it affects how you experience empathy.</p>
<p>Here are some examples from the model for Affective Empathy, just to illustrate potential uses for such a model:</p>
<ul>
<li>If Step 1 fails, you may not even notice that something is happening to the person in front of you (ex: in many cases of autism).</li>
<li>If Step 2 fails, watching a human being feels more or less the same as watching an object (ex: in severe autism).</li>
<li>If Step 3 fails, you may feel something but not know what you&#8217;re feeling (ex: in alexithymia).</li>
<li>If Step 4 fails, you may confuse their feelings for your own (ex: in many empaths).</li>
<li>If Step 5 fails, detecting the other person&#8217;s feelings does not translate into your own experience of sadness (ex: in psychopathy).</li>
</ul>
<p>We can predict what kind of activities or techniques may help you improve your empathy based on which step or steps your difficulties may be coming from.</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_empty_space"   style="height: 12px"><span class="vc_empty_space_inner"></span></div>
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This approach of using cognitive models to predict potential causes of your symptoms and difficulties is still in its early stages, but I love its ability to make specific and testable predictions &#8211; without which, are we really doing science?</p>

		</div>
	</div>
</div></div></div></div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Calling</title>
		<link>https://amruth.in/art/calling/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Sun, 26 Jun 2022 13:27:53 +0000</pubDate>
				<category><![CDATA[Art]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=2101</guid>

					<description><![CDATA[We don't find our calling. We call to it and hope it finds us.]]></description>
										<content:encoded><![CDATA[<p>We don&#8217;t find our calling. We call to it and hope it finds us.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Efference Copies</title>
		<link>https://amruth.in/science/efference-copy/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Sat, 25 Jun 2022 10:14:35 +0000</pubDate>
				<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=1891</guid>

					<description><![CDATA[How does our brain know if our eyes are moving or the world?]]></description>
										<content:encoded><![CDATA[<p class="md-end-block md-p md-focus"><span class="md-plain"><strong>Wanna</strong> see something that&#8217;ll make you mistrust your own vision? Stand in front of a mirror and move your gaze left and right. Maybe shift your gaze back and forth between your eyes. Your eyes don&#8217;t seem to move at all, right? Now record yourself doing this and watch the video. In the video, your eyeballs move quite vigorously &#8211; yet in the mirror you saw them as perfectly stationary! How is this possible?</span></p>
<p class="md-end-block md-p"><span class="md-plain">Historically, researchers believed that our visual system temporarily stops processing visual signals when we move our eyes (our eyes move in rapid jerks called &#8216;<strong>saccades</strong>&#8217;) to avoid seeing blurry images of the world around us every time we shift our gaze. This was called &#8216;</span><span class="md-pair-s "><em><span class="md-plain">saccadic blindness</span></em></span><span class="md-plain">&#8217;. However, a clever experiment showed that we do process retinal images even during a saccade. A series of vertical lines on a screen was made to move horizontally at speeds fast enough to make them invisible to the naked eye. But when the viewer moved their eyes in the same direction as the movement of the lines, they could temporarily see the lines, since the relative speed between the moving eyes and the moving lines was reduced. This experiment showed that the brain processes the images falling on our retina even during a saccade, but somehow suppresses them unless they&#8217;re clear &#8211; &#8216;</span><span class="md-pair-s"><em><span class="md-plain">saccadic suppression&#8217;</span></em></span><span class="md-plain">. If the brain&#8217;s intent is to avoid processing blurry images, why doesn&#8217;t it suppress blurry images resulting from external motion &#8211; when we&#8217;re looking out of a moving train, for example?</span></p>
<hr />
<p class="md-end-block md-p"><span class="md-plain"><strong>In a 2016 paper</strong> titled &#8216;Neural mechanism of saccadic suppression&#8217;, the authors discuss their experiment to identify how our brain&#8217;s perception of retinal images changes during a saccade, specifically in the middle temporal (MT) and the middle superior temporal (MST) cortical areas of the brain. They found that about 66% to 68% of the neurons showed significant differences in the way they process retinal images from a saccade-induced motion vs externally-induced motion (like a train journey). The remaining neurons either didn&#8217;t respond to high-velocity image motion at all or responded equally well in both cases. What surprised them was a saccade-induced reversal in the behaviour of a significant percentage of neurons from the former category. Neurons that lit up when they identified left-to-right motion of an externally-induced image were lighting up when the images moved right-to-left during a saccade! Further, <strong>this reversal started about 70ms before the saccade began</strong>, indicating that there&#8217;s some top-down intervention responsible for saccadic suppression that is linked with the brain&#8217;s decision to initiate a saccade, in addition to whatever effects the actual movement of the eye may be contributing.</span></p>
<p class="md-end-block md-p"><span class="md-plain">What could be influencing this alteration in the way our brain processes the signals from our eye during a saccade? The clue lies in the observation that this alteration starts about 70ms </span><span class="md-pair-s "><em><span class="md-plain">before</span></em></span><span class="md-plain"> the actual saccadic movement begins. What comes just before a saccadic movement? To answer this question, let&#8217;s take a small detour into the fascinating world of electric fish.</span></p>
<hr />
<p class="md-end-block md-p"><span class="md-plain"><strong>Mormyrid fish</strong> detect their prey by using electroreceptors on their body to sense small electric fields generated by their prey. However, this is a tricky business for the Mormyrids because they themselves repeatedly generate large electric pulses for navigation and communication (known as Electric Organ Discharges or EODs). These EODs activate their electroreceptors, interfering with the much weaker electrical fields generated by their prey. How then do they go about detecting their prey without confusing themselves all the time? The authors of another study investigate this question using an elegant and elaborate setup.</span></p>
<p class="md-end-block md-p"><span class="md-plain">First, they figure out how to record electrical activity from the neurons in the fish&#8217;s electrosensory lobe (ELL), the region within the fish brain where signals from its electroreceptors first get processed. Then, they manage to mimic the fish&#8217;s EODs by placing a small electric dipole within the water and near the electroreceptors on its scales. Then, they paralyse the muscles which generate EODs without interfering with the fish&#8217;s ability to send commands to these muscles. They also figure out how to tell when the fish is sending a command to discharge EODs. Lastly, they generate fake prey-like electric fields within the water to see how they&#8217;re processed by the Mormyrid ELL.</span></p>
<p class="md-end-block md-p"><span class="md-plain">What they found is that the fish&#8217;s ability to detect the fake prey reduced drastically whenever they sent an artificially generated EOD that was not synced with the fish&#8217;s command. However, the fish regained its ability for prey detection the moment they synced their EODs to predictably follow the fish&#8217;s command. Their best guess? Whenever the fish sends out a command to generate an EOD, it also tells the neurons in its ELL to filter out the electrical signature of its own EOD from the total signal received at its electroreceptors. Think of it as a negative image of its own EOD&#8217;s electrical signature that gets sent to the neurons in its ELL. When this negative image gets added to the total signal coming from its electroreceptors, it filters out the EOD from the electrical signature generated by external electrical activities in its environment (like the presence of a prey). This is akin to how your active-noise cancellation headphones work.</span></p>
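<p>The fish&#8217;s &#8220;negative image&#8221; trick boils down to signal subtraction, which a few lines of toy code can illustrate (the integer amplitudes are invented for illustration):</p>

```python
# Toy illustration of the fish's "negative image" cancellation.
# Integer amplitudes are invented for illustration.

own_eod     = [0, 50, -50, 0]   # large self-generated pulse (the EOD)
prey_signal = [1, 0, 1, 2]      # much weaker external field from a prey

# What the electroreceptors actually receive: both signals superimposed
received = [e + p for e, p in zip(own_eod, prey_signal)]

# On each EOD command, the ELL adds a learned negative image of the
# expected EOD to the incoming signal, cancelling the self-generated part
negative_image = [-e for e in own_eod]
recovered = [r + n for r, n in zip(received, negative_image)]

print(recovered)  # → [1, 0, 1, 2]  (the prey signal alone)
```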
<hr />
<p class="md-end-block md-p"><span class="md-plain"><strong>If</strong> this is how electric fish cancel out the sensory signals resulting from their own actions, could this be something our brain does too? The answer is yes.</span></p>
<p class="md-end-block md-p"><span class="md-plain">Every time we initiate a movement of any kind, our brain sends a copy of that command (called an &#8220;<strong>efference copy</strong>&#8221;) to all the relevant regions of the brain that are involved in sensory perception. Why? To help them predict and adapt to the incoming signals generated from our own actions. Such information about our own commands for action plays a different role in prediction-based models when compared to extraction-based models of the brain.</span></p>
<p class="md-end-block md-p"><span class="md-plain">In extraction-based models, such information helps our brain identify which component of the incoming sensory signal must be attributed to the consequences of one&#8217;s own actions. The primary benefit of this information is <strong>accurate attribution</strong>. This, as we&#8217;ll see later, is the precursor to our <strong>sense of self</strong> &#8211; at least one kind of it (we possess several kinds of self).</span></p>
<p class="md-end-block md-p"><span class="md-plain">On the other hand, in prediction-based models, such information helps our brain predict future sensory signals that are a direct consequence of our own actions. The primary benefit of this information is the <strong>accurate processing of cause and effect</strong>. This is the precursor to our <strong>sense of agency</strong>. The feeling of being the cause of actions or <strong>free will</strong>.</span></p>
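<p>The two roles can be contrasted with toy numbers (the forward model, signal values and function names here are all invented for illustration):</p>

```python
# Toy contrast of the two uses of an efference copy.
# Entirely illustrative; not a model of real neural processing.

def attribute(incoming, expected):
    # Extraction-based use: split the incoming signal into the part caused
    # by your own action and the externally-caused remainder (-> sense of self)
    return {"self": expected, "external": incoming - expected}

def agency(observed, predicted, tolerance=1):
    # Prediction-based use: did the input change the way your command
    # predicted it would? A match feels self-caused (-> sense of agency)
    return abs(observed - predicted) <= tolerance

efference_copy = 3                    # copy of the motor command just issued
expected_effect = efference_copy * 2  # assumed forward model: command -> sensory effect

print(attribute(9, expected_effect))  # → {'self': 6, 'external': 3}
print(agency(6, expected_effect))     # → True: matches prediction, feels self-caused
print(agency(0, expected_effect))     # → False: mismatch, feels externally caused
```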
<p class="md-end-block md-p"><span class="md-plain md-expand">Does this mean our sense of self is stronger in systems of the brain that employ extraction-based models? Conscious thinking, for example. Along the same lines, is our sense of agency stronger in systems that have prediction-based models (like perception)? Is it possible to have one without the other if we can find an activity that exclusively uses systems that are extraction-based or prediction-based? How can we test these predictions? More on these later.</span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Be Yourselves</title>
		<link>https://amruth.in/art/be-yourselves/</link>
		
		<dc:creator><![CDATA[amruth]]></dc:creator>
		<pubDate>Fri, 24 Jun 2022 13:42:18 +0000</pubDate>
				<category><![CDATA[Art]]></category>
		<guid isPermaLink="false">https://amruth.in/?p=2117</guid>

					<description><![CDATA[Be yourselves. All your selves. At the same time.]]></description>
										<content:encoded><![CDATA[<p>Be yourselves. All your selves. At the same time.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
