vscode ssh plugin doesn't work any more, back to nano :despair:
i guess i should just clone it locally like a normal person i was way too spoiled by this. i don't need to send every keystroke
the inevitable-yet-unforeseeable
messing with cellular automata (click to see at full size)
all rules of type (a, b, c, d) where an active cell stays active if a <= neighbours <= b, and an inactive cell becomes active if c <= neighbours <= d, split into 16x16 chunks for each rule
conway's game of life is at (2, 3, 3, 3) here:
it's cool to see that there are quite a few categories of possible worlds that can result from these parameters, though it would seem that most of them are not as interesting as cgol
i would like to learn gpu programming to make it go faster. that should be possible since each step is just a convolution, but right now 50 iterations at that size is very slow
this would also be a good use case for unit testing, which i always neglect. it would be so handy here: if i tried to optimise the code a little, i could check that it still produces the same output
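here's roughly what i mean by "just convolution" (a minimal sketch with scipy on cpu; i'd guess a gpu version could swap in something like cupy, but i haven't tried):
```python
# minimal sketch of one (a, b, c, d) rule step as a convolution.
# scipy on cpu here; a gpu port would presumably swap numpy/scipy
# for cupy or similar, since the whole update is one convolution.
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])  # count the 8 neighbours, not the cell itself

def step(grid, a, b, c, d):
    n = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    stay  = grid & (a <= n) & (n <= b)        # active cells that survive
    birth = (1 - grid) & (c <= n) & (n <= d)  # inactive cells that activate
    return (stay | birth).astype(grid.dtype)

# conway's game of life is (2, 3, 3, 3)
grid = np.random.randint(0, 2, (64, 64))
for _ in range(50):
    grid = step(grid, 2, 3, 3, 3)
```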
a friend was showing me some cool explorations of cellular automata (like conway's game of life)
i had some ideas i thought might be novel, and i assumed this area was relatively unexplored, but it turns out people have uncovered so much crazy stuff, and there's way more interest than i had expected
which i actually find motivating, even though i originally worried it would prove discouraging
this site has so much interesting stuff but unfortunately the interactive database seems to be down
there also seems to be a thriving community and wiki here
it's crazy to me that so many people are still interested in it today when it was discovered 50 years ago
but still, i am interested in looking at different game of life rulesets and automatically classifying their outcomes (whether they converge to nothing or mere flickering, remain chaotic, or land somewhere in between like conway's game of life)
the community seems more focused on discovering/creating creatures within conway's game of life (among lots of other things) which is less interesting to me
torrenting software feels good to use so i figured it wouldn't be so hard to make a performant/reliable distributed platform...
but from my experience trying to use this kind of stuff, it actually seems to be very hard
i guess you don't notice the performance penalty as much for a long download compared to something that should be more responsive
sounds dumb, but maybe the hardware infrastructure actually isn't yet at the point where every user can leave a server running overnight
i don't like when puzzle games have you copying the solution from some other place rather than making your own inferences
environmental puzzles are done well when they are still interesting to solve after you notice the gimmick
if the difficulty only comes from the fact that transcribing the solution is inconvenient then i find it especially tedious
i also don't like when you have to copy the solution from somewhere far away and it's unclear that the solution belongs there
seems like an intentional design choice that you can't kill your botany plant on demand to get a new one
that's not very zen of every other idle garden minigame ever made
i was listening to this podcast about cheating in chess and he mentions Laszlo Mero's model of class units for measuring game depth
i can't find the part about that in the book, and i'm not really sure how it's calculated :(
but it would be cool to use that kind of metric as part of a model to design a game
there could be rules constraining the types of game rules that could exist, like maybe it's a chess-like game with a fixed number of pieces with certain combinations of movement patterns
and then you could estimate the depth of a game by running simulations and measuring how much of an advantage a stronger computer has over a weaker computer
and use that to guide a search towards a game which has lots of interesting emergent strategy
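to make that concrete, here's a toy version of the ladder idea: depth-limited minimax agents playing noughts and crosses, scoring how much each extra ply of search is worth. the game, the agents, and the scoring are stand-ins i made up, not mero's actual class-unit definition:
```python
# toy 'depth ladder': pit depth-limited minimax agents against each other
# on noughts and crosses and measure what an extra ply of search is worth.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player, depth):
    # negamax value of the position from the side to move
    if winner(b):
        return -1                    # previous player just won
    moves = [i for i in range(9) if b[i] is None]
    if not moves:
        return 0                     # draw
    if depth == 0:
        return 0                     # cutoff: call unresolved positions even
    opp = 'O' if player == 'X' else 'X'
    best = -2
    for m in moves:
        b[m] = player
        best = max(best, -minimax(b, opp, depth - 1))
        b[m] = None
    return best

def pick(b, player, depth):
    moves = [i for i in range(9) if b[i] is None]
    if depth == 0:
        return random.choice(moves)  # depth 0 = a purely random player
    opp = 'O' if player == 'X' else 'X'
    scored = []
    for m in moves:
        b[m] = player
        scored.append((-minimax(b, opp, depth - 1), m))
        b[m] = None
    best = max(s for s, _ in scored)
    return random.choice([m for s, m in scored if s == best])

def play(dx, do):
    b, depths, player = [None] * 9, {'X': dx, 'O': do}, 'X'
    while winner(b) is None and None in b:
        b[pick(b, player, depths[player])] = player
        player = 'O' if player == 'X' else 'X'
    return winner(b)

def score(d_strong, d_weak, games=200):
    # strong player alternates colours; draws count half
    total = 0.0
    for g in range(games):
        w = play(d_strong, d_weak) if g % 2 == 0 else play(d_weak, d_strong)
        strong = 'X' if g % 2 == 0 else 'O'
        total += 1.0 if w == strong else 0.5 if w is None else 0.0
    return total / games

for d in range(4):
    print(f"depth {d+1} vs depth {d}: scores {score(d+1, d):.0%}")
```
the number of rungs where the deeper agent still wins convincingly is one crude proxy for depth; a richer game would need a proper engine instead of raw minimax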
chess is well over a thousand years old and went through a similar organic evolution process, with different versions played in different cultures, and it's great, but i think we could probably invent a superior game
though, i'm thinking it would probably be a lot easier to begin in an area that is not chess, because it has a lot of complexity itself
a rudimentary minimax chess computer was one of the first things i ever tried to program as a kid, and it was very hard! i never succeeded
maybe something like checkers where there's only one type of piece that does the same thing and few special rules
or even something more abstract like 'paper scissors rock' but adding more options or more hands or something
it would be nice for the optimisation process to have a way of detecting a 'dead game' that had trivial strategies to end in a forced win, but that is probably really hard to prove even for checkers
the nicest effect is that the cold feels colder. chemical winter
i have been learning the japanese hiragana and katakana, but kanji is too much to learn, so i will stop when i am finished with those. i like to transliterate japanese text even though i don't know its meaning; there's at least a little that can be understood
the app Kana Town and the DJT Kana Tool are simple but very helpful
it would be really awesome if the recent "misinformation" craze could turn a critical eye on the current state of advertising, but i guess that's just too big of a leap for the general public to make on their own...
if advertising provided value to society then surely ads would tend to get much better CTRs than they do in practice
in my view ads are a great example of a race to the bottom
some argue that advertising improves consumer information, but in my view that effect is very marginal, and advertising creates much more market friction through the anticompetitive effects of megacorporations spending billions on it
there's a threshold of messaging required to make a brand well-known enough that your customers will advertise for you, which explains huge marketing budgets
also, the primary intended effect of advertising is not to provide mere information; it is to manipulate viewers into buying particular brands. usually it ends up causing individuals to make irrational purchases they otherwise wouldn't have
i believe that wide usage of adblocking software would improve everyone's lives by much more than just the reduction of inconvenience/annoyance/distraction
i view an individual's decision to serve ads on their website as a prisoner's dilemma betrayal made without effective communication
if uptake of adblockers were ever great enough, advertising companies probably have a lot of ways to make ads harder to block. i wouldn't be surprised if they have a lot waiting in reserve
but i think that's an arms race we always win in the end once people acknowledge that it exists
i think a lot of intended meaning gets missed in movies (though perhaps it was never even intended to be captured by the audience) because encoding themes is lossy
i guess like business intelligence for the general public
i remembered Basis, which i admit i still don't really understand at all after looking at it again. it doesn't seem to overlap that much with my idea, since it has a grander focus on building a new economy rather than just improving the old one
looks a bit dead but i think it's a great concept
also remembered about PeerTube which obviously is decentralised and hosts a large amount of data
i guess federation is the middle ground in the tradeoff between reliability and decentralisation
i didn't see any federated wikis/databases aside from maybe Hubzilla, which claims it does "everything", so i'm guessing it's like a federated version of Discourse with a ton of bloated features...?
i dunno why they don't just make a smaller framework though
when i google this i find a lot of scary sites that i have never seen before, and i'm not sure when it even started or how it's going. the website also seems slightly broken. but it's cool
i think just having the fediverse exist will be great for when things start getting bad on the major platforms and they try to tighten the screws on their seemingly captive userbase
i know network effects and all that are impossible to fight against right now but i believe that humanity will drift towards such solutions eventually if things don't go too horribly wrong
i believe that some sort of decentralised database system could solve a lot of problems that are caused by information asymmetry
like information about every product sold and its price history and user-submitted reviews
guess i'll learn more about cr*pto
though i can't really imagine an acceptable solution in my head
the data would be more than just transaction history so copying an entire blockchain would be infeasible
and if data isn't stored centrally or duplicated, accessing it becomes unreliable
seems like one of those uncheatable tradeoffs but maybe someone found a smart solution
or i could just make the data as small as possible but i think there should be photos
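for what it's worth, content addressing (the trick ipfs and torrents use) seems like the known escape hatch: replicate only the small hashes, and fetch the bulky stuff (photos etc.) from whoever has it, verifying it against the hash. a toy sketch, with a plain dict standing in for the peer network:
```python
# toy content addressing: the replicated index holds only hashes, the bulky
# data lives on untrusted peers, and integrity is checked on retrieval.
import hashlib

peers = {}  # hash -> blob; stand-in for a distributed hash table

def publish(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    peers[h] = data          # in reality: announce to the network
    return h                 # only this small hash goes in the shared index

def fetch(h: str) -> bytes:
    data = peers[h]          # in reality: ask whichever peer claims to have it
    if hashlib.sha256(data).hexdigest() != h:
        raise ValueError("peer served corrupted data")
    return data

photo_hash = publish(b"...jpeg bytes...")
assert fetch(photo_hash) == b"...jpeg bytes..."
```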
practically, the only good colours for this page: olive, maroon, teal, lightslategrey, steelblue
i think i will write more soon
and go through the backlog of old notes
also i should make a [post] tag similar to [img] for linking other posts when i get around to rendering posts to individual pages
#123 was actually wrong: the computer likes to exploit 1. e4 c5 2. Ne2 with 2... Nf6, which it evaluates at -0.1 (a slight edge for black)
it seems like 2... Nf6 would normally be questionable if white played 2. Nf3, because of 3. e5 Nd5, but if the knight is on e2 black can play 3... Ng4 instead, threatening the pawn on e5:
after 2... Nf6 3. Nc3 the computer likes to play 3... d5 which it thinks is equal after trades
most of the players in the masters database don't seem to play this:
that's understandable given that there's hardly any downside to just playing the sicilian normally in that position (though it leaves white with a slight advantage as opposed to giving black a slight advantage)
this is not something that would get played often so i wonder if the people who do know about it were memorising every variation from a book
also worth pointing out that white doesn't play d4 most of the time, because the point of 2. Ne2 is to play whatever dumb Keres setup, not just to transpose into the open sicilian (though i think it's funny to do that)
i think it's pretty cool that a largely unexplored line like this can arise from a weird move on move 2
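if you want to poke at the line yourself, something like this works with python-chess and a local stockfish binary (the engine path and depth here are just placeholders):
```python
# checking the 2... Nf6 3. e5 Ng4 idea with python-chess and a local
# stockfish binary -- "stockfish" and depth=25 are placeholders.
import chess
import chess.engine

board = chess.Board()
for san in ["e4", "c5", "Ne2", "Nf6", "e5", "Ng4"]:
    board.push_san(san)

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    info = engine.analyse(board, chess.engine.Limit(depth=25))
    print(info["score"].white())  # negative = the engine prefers black
```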
ok i need to improve the css, that is not readable lol
i might also try to adapt some real-time rants from various online chats to here
and maybe at least make individual blog posts linkable to their own separate page (will require a bit of work) so i can send them to others
properly formatting posts and adding images/citations is... unlikely however :P
other possible topics:
pessimism (meh too heavy)
ad blocking
charities
tertiary education (degrees and academic research)
the shallow web
a surprising fact: when google claims it has "3.4 billion results" for your search term, it is essentially lying
google will only display up to a few hundred results at most for ANY search term
and it continues to show an incorrect count until you reach the very last page
you can verify this for yourself by searching for any term, then clicking through the pages until around page 20 or 30
or, after clicking through once, you can change the &start URL parameter to something like 800, and from there it will tell you the true result count
Sorry, Google does not serve more than 1000 results for any query. (You asked for results starting from 2000.)
if you set &start higher than 1000, you get this error message
but usually it seems to be more like 100 or 200 results before duplicates are shown (explained next)
In order to show you the most relevant results, we have omitted some entries very similar to the 172 already displayed.
If you like, you can repeat the search with the omitted results included.
there is also a link on the last page; you can usually click it to get perhaps up to 200 more results
in my experience these are actually duplicate results, so it's not giving you more info
a cynical observer such as myself wonders if this was introduced purely to create confusion regarding the overall result count and make the topic less communicable
like, imagine someone starting a thread about this, and then the first person to see it reads that message but doesn't click it and/or doesn't go to the last page AGAIN (because disabling duplicates sets you back to page 1)
so if you're clicking through rather than changing the &start parameter, it takes a long time and real dedication to reach the "true" last page while duplicate results are enabled; i saw a conspiracy youtube video where a guy spends minutes going through this process
and some people might be turned away from explanation posts by an extra few sentences explaining this duplicate message
i feel like bad-faith social engineers might consider this kind of thing
perhaps it's more surprising that so few people seem to know about this; ironically, i can't find much about it online
from what i can tell this got worse maybe around 2015 and it used to show a lot more results
i have also subjectively observed decreases in google search result quality over the past few years; it used to be easy to make specific queries with several words in quotes, but now that hardly works
definitely felt like it got "dumber" at some point though don't remember when
i have been trying to pin down exactly what about it i find so unnerving
it does bother me that they try to hide it, and show a false number of results that are never actually accessible
it's definitely obscured intentionally, because i think most people would be at least a little bothered if they discovered this
rather than actually being directly concerning, i think this phenomenon is more illustrative of other problems google search creates that can be hard to intuitively grasp
it's like seeing with your own eyes the ocean cascading off the edge of the world, or escaping a mirror maze
practically, there might as well only be one page for every search. looking around at CTR (click-through rate) figures suggests:
- about 25% of clicks are on the first result
- less than 1% of clicks are on results from the second page
if google showed every search result it would make no real difference to the number of times certain pages are accessed
we could consider the space of websites that are accessible through generic search terms like "world war 2", "restaurant in [area]", "giraffe" to be "the shallow web"
anything that requires searching the exact website name or a specific string in quotes would fall through the sieve of commonly searched for terms
it becomes impossible for a naive user to find any sites outside the shallow web without discovering website names from a non-google source
there is also nuance in this dichotomy, because a lot of the time it can be hard to find even the non-site-specific terminology needed to refine a search
the reality is that the majority of sites including many good ones will always be practically inaccessible without extremely specific search terms, and that sucks
i think seeing the sudden and surprising limit of just a few hundred google search results for a common search term shatters the (understandable) illusion a lot of people have that the web is infinitely large
most sites have to attract traffic through links from "human search engines" on centralised social media platforms
because of these immense network effects, such efforts can no longer be self-hosted or practically indexed by google
in the larger context of increasing centralisation on earth, i can only imagine the overall effect search engines have to be extremely anti-competitive
whichever company holds the #1 spot for search terms relevant to its niche gets the most effective advertising possible (people who specifically searched for the term) for free, and gains more secret SEO points and market share every time someone clicks its page
this encourages companies to merge or operate under another organisational layer so they can occupy the top search result together, even for small businesses run by tradespeople
it also concerns me how recent many search results seem to be, even for historical search terms, and how biased they are in favour of news sites
i have found news articles to usually be misleading and of low quality, yet they have extreme popularity because people want to know the news
there's also the question of the extent to which google manually and/or automatically blacklists/deboosts some results for political reasons (whether it's to insulate themselves or to advance their own political aims), i know that this happens but i haven't looked into it enough
i would have liked to run some sort of automated process to record this data and maybe plot some graphs, but i don't think it will be easy without using some 3rd party api
i have run into google's anti-scraping mechanisms before just by doing regular searches
i don't want to get my IP blacklisted either as it would inconvenience my life
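for reference, this is roughly the kind of script i mean. the &num/&start parameters and the "About N results" string are assumptions based on how the results page looks to me, and google rate-limits and changes its markup, so treat this as a sketch rather than something robust:
```python
# sketch of logging google's claimed result count at increasing offsets.
# the parameters and the "About N results" regex are guesses that google
# could break or block at any time.
import re, time, urllib.parse, urllib.request

def claimed_count(query: str, start: int) -> str:
    url = ("https://www.google.com/search?" +
           urllib.parse.urlencode({"q": query, "start": start}))
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8", "replace")
    m = re.search(r"About ([\d,]+) results", html)
    return m.group(1) if m else "no count found (blocked? last page?)"

for start in (0, 100, 200, 400, 800):
    print(start, claimed_count("giraffe", start))
    time.sleep(10)  # be gentle; hammering this is how IPs get blacklisted
```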
and if we're talking alternative search engines: google has >80% market share, so they clearly hold all the power here
the popular alternatives are all the same corporate garbage anyway so it wouldn't matter much if there was actual competition in this area
also i find it equal parts hilarious and disgusting to see bing slowly gaining market share after all the obnoxious attempts microsoft makes to funnel naive users into using it
an unfortunate page break but time will see it buried
I hereby christen the Unideology.
It has no precepts other than to rot here, forgotten and unmentioned forever.
May it fester as an artistic statement in ineffective opposition to modern ideological domination.
it's not obvious why any thing should be as it is