An ongoing list of disjoint thoughts that are worth writing down. Some are old.

Deep Okay-ness

Based on my recent interest in meditation and a blog post from Sasha Chapin, I’ve been thinking about the feeling many describe as “deep okay-ness”, “persistent self-love”, or an “ambient feeling of good”. I think this ought to be the primary goal of most people’s lives: to have a sense of good that is independent of external circumstances. It seems like a clear way to improve QALYs, or wellbeing, or however you want to frame good conscious experience, with no downside. We ought to be studying this more!

Sentience and Happiness

In February, I was at yoga and thought about a similarity between sentience and happiness: namely, that both are impossible to prove to an external agent. Rather, we try to tease out their existence via behavioral evidence, which is imperfect but seems to work alright. I suppose this is clearly true, since a feeling like happiness depends entirely on sentience, so there’s no way around the problem of proof. But as I lay there stretching and meditating, I thought it was a bit strange that no one can ever know either about others for certain.

Moving

When you first arrive in a place, for the first ~2 months you are acutely aware of your location. This is part of what makes travel fun: your brain is constantly receiving stimuli that say “woah, this is not where I normally am”. This is also what makes it difficult to be a nomad. If you move before familiarity sets in, you are constantly adding an additional layer of stimulus. When that’s what you’re after, it’s good. But often it can be bad.

Experimenting with Spending

I’ve seen discussion before (from Nick Cammarata and others) of the value of experimenting with the way you spend money and reflecting on the results. The idea is that, once you’re in a sort of equilibrium budget, you double spending in a single category for a length of time and see which areas of your life give you outsized returns. I’ve not done this in a scientific way yet, but plan to soon. Analytic happiness, I suppose.
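If I ever do run it “scientifically”, the bookkeeping might look something like the toy sketch below; the categories, budgets, and wellbeing ratings are all made up for illustration, not real numbers or a real protocol.

```python
# Toy sketch of the spending experiment: double one category's budget for a
# trial period and compare self-reported wellbeing before vs. during.
# All values here are hypothetical.
from statistics import mean

# Baseline monthly spend per category (made-up numbers).
baseline = {"food": 400, "books": 50, "travel": 200}

def wellbeing_delta(before, during):
    """Change in average daily wellbeing rating (1-10) across the trial."""
    return mean(during) - mean(before)

# Example trial: doubling the "books" budget for a month.
doubled_category = "books"
trial_budget = {c: (2 * v if c == doubled_category else v) for c, v in baseline.items()}

before_ratings = [6, 7, 6, 7, 6]   # logged before the trial
during_ratings = [7, 8, 7, 8, 7]   # logged during the trial

print(f"Doubled {doubled_category}: {baseline[doubled_category]} -> {trial_budget[doubled_category]}")
print(f"Wellbeing delta: {wellbeing_delta(before_ratings, during_ratings):+.1f}")
```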

Length of Lists

Is a non-axiomatized list always an arbitrary length?

This is a thought I’ve had for about a year, but haven’t discussed with anyone.

When I was studying theories of causation, there always seemed to be issues when it came to listing the causes of an effect. Especially for probabilistic events, like “person A has lung cancer”: if you tried to list all of the causes, then 1) it would be impossible, and 2) you would start with some very obvious things, and at some point the entries would become less probable or less related, without any clear demarcation. A cause you would list rather early is “person A’s childhood home was filled with asbestos”; perhaps another could be “person A smoked 50 cigarettes when they were 30”. But the first is probably more of a cause than the second.

So there are times when a list (a series of statements with a collective meaning) has an arbitrary length.

Compare this to an axiomatized list; for example, the axioms of geometry. Five statements completely define the logical system: four would fail to do so, and a sixth would add no additional value (?). The length of this list is non-arbitrary.

So there are arbitrary-length lists and non-arbitrary-length lists, and I think it’s the axioms that give a list this non-arbitrariness. But I don’t know. All of these definitions would need to be made much clearer.