No, this isn’t a climate change post, although it would have been a perfectly fine headline for one. The Earth as a whole is getting hotter and people aren’t going to do enough to slow that down. But this post is about me moving to places that would have been hotter than where I moved from even if we weren’t on our way to massive ecological disasters.
In 2017 I moved from Oslo, Norway, to Boston, Massachusetts. That’s across the Atlantic, yes, but it is also almost 18° of latitude south. Since Oslo is at 60° N, that’s almost a third of the way to the equator. Norway is warmed by the Gulf Stream though, and Boston isn’t, so the differences aren’t too harsh. It’s almost more noticeable that it gets dark at night in summer. Almost.
Here’s a plot comparing Oslo and Boston temperatures by month with data from Wikipedia:
As you can see (unless you’re listening to this in a text-reader, in which case keep listening) there is a lot of overlap in the max-min range per month. Oslo’s max and mean temps are consistently below Boston’s, and except for January so is the min temp. The overlap is greatest from January through May, but after that Oslo’s average daily maximum drops below Boston’s mean, and for July through December Oslo’s average daily mean temperature is below Boston’s average minimum. Oslo is definitely colder, but contrary to what many expect, it’s not because of cold winters. It’s because Boston is too hot in summer.
In an ideal world I would have next moved to somewhere that split the difference between Oslo and Boston, but since I’m at the whim of my wife’s academic career, I now live in North Carolina. North Carolina does not split the difference between Oslo and Boston, instead it is even further south and even hotter. How hot? Well …
I live in Carrboro, a town that would be part of Chapel Hill if the US didn’t have hyperregressive local self-determination over administrative boundaries. It’s another 7° of latitude further south. If it had been 12° it would have been at 30° N and there would have been a beautiful geometric symmetry with Oslo’s 60°, but it isn’t, so there isn’t. Anyhow: Chapel Hill has temperature data on Wikipedia, Carrboro does not. But since the two are practically the same thing, here’s a plot comparing Boston and Chapel Hill:
Now North Carolina does get Gulf Stream effects, at least along the coast, but I have no idea if that matters for Chapel Hill. What we see here though is that there are some similarities to the Oslo-Boston comparison. … In that Chapel Hill is hotter than Boston. It’s mostly not like the Oslo-Boston comparison though: in no month is the overlap as large, the differences are biggest in January and February, and even when there is significant overlap the average daily temperature in Chapel Hill is above, or at least close to, the average daily high in Boston.
Currently it is April, but if I hadn’t been told already these graphs would have thoroughly convinced me that as hot as I found Boston, Chapel Hill is much hotter. … Yay …
Let’s round this off with the straight Oslo-Chapel Hill comparison so you can see how much hotter it is:
In May, June and July a hot day in Oslo is a well below average day in Chapel Hill, and for the rest of the year a warm day in Oslo is a cold day in Chapel Hill. And if you’re a delicate arctic flower like me, you should avoid June through August. It looks like a lot of days will be bearable, but on the other days I will sing praise to the refrigerating heat pump!
The plots in this post were created in R using ggplot. The code is available here: https://github.com/btuftin/Assorted/tree/master/Climate%20comparison
Donald Trump has just been defeated. This is undoubtedly good news. A second term would have been terrible both in policy outcomes and in psychological effect on both his fans and detractors. But once everyone is done waving actual American flags and shooting off fireworks and we start waiting for the electoral college to make Biden and Harris officially official president-elect and vice-president-elect there are important realities to acknowledge.
For one thing the blue wave was rather disappointing. Now there is some room for it to get more impressive as remaining races shift, but even with the motivation of the pandemic there was no massive repudiation of Trump and Trumpism, and unless the Democrats can come from (slightly) behind and win two run-off elections in Georgia in January, the Republicans will still have a majority in the Senate, and while Collins et al. will be deeply concerned about Mitch McConnell’s choices, they will vote down important Biden legislation anyway.
In two years it will be midterms though, and things could change, with Trump, hopefully, a fading memory by then. In the longer run, a bigger issue is that Fox News still exists. In four years of being the worst president in the history of the US, Donald Trump never fell below 36% approval in 538’s aggregate of polls, and for the last two years he stayed above 40% among likely voters, thanks in large part to right-wing media functioning as a de facto propaganda machine for the administration.
While the Washington Post documented Trump making an average of 16.9 false or misleading claims a day, Fox News, among others, worked hard to twist reality so that his lies would instead be either “actually the truth liberals don’t want to hear”, “hyperbole, everyone uses that” or “a joke, ha ha”. Similar approaches were applied to other types of Trump-related scandals, and when none of those approaches worked they ignored the story completely and ran a story about pandas or something instead.
Now, Fox News isn’t the only problem. Less biased mainstream media employ opinion writers and talking heads with just as twisted views of reality, and invite contributors even more twisted than that in the interest of “balance”. And there’s Sinclair Media, which feeds similar crap to hundreds of “local” TV stations. But Fox News stands out because of its size (it’s the most watched cable network) and because of its symbiotic relationship with the GOP and, for most of the last four years, the president.
Fox News also shows, in my opinion, that going forward anyone worried about Trumpism needs to go all in on making lifestyle choices based on their politics, as well as engaging in political activism. Yes, it’s important to increase focus on local elections and support grassroots movements such as those that helped turn Georgia blue this year, and might also get us a 51-50 Senate with VP Harris breaking ties. But money and propaganda still play an oversized role in US politics, and even progressive politicians struggle to follow through on changing that.
What has been shown to work, though, is voting with one’s wallet. Relatively short-lived campaigns have hurt Fox News and, at least temporarily, influenced their programming choices and the actions of advertisers. More concerted efforts, sustained over time, could do more. It is nice to think the next election cycle will make it possible to enact more progressive policies, but a lot of voters have short memories and a view of themselves as future millionaires, and the people who are millionaires are only too happy to funnel some of those millions into promoting politicians who have little to offer beyond “lower taxes!” But their millions are contingent on everyone continuing to buy their goods and services, and redirecting our purchasing power to where the least of it flows into promoting regressive politics has a proven track record.
It also has the potential of creating improved conditions for workers both domestically and abroad, even without new laws and regulations, as employers who exploit their employees are often the same ones who donate big to the rottenest politicians. So it’s really win-win, but it requires everyone to pay attention, even when it’s not an election year, and even as “What the fuck did the president do today?” goes back to being a rare question.
A naive (or corrupt) judge may have ruled that no reasonable viewer takes what Tucker Carlson spews as factual, but it would take losing a lot more revenue for Fox News to stop broadcasting him to his obviously unreasonable audience. But even if they never do, boycotting Fox’s advertisers takes money out of pockets that donate to regressive politicians. Again, win-win.
Just remember that electing Biden was a battle won, but this isn’t a war. For one thing, few Trump voters are actual enemies, and unlike a war, in politics there is never a final cessation of hostility. But like in a war we could get far by strangling the supply lines.
The headline is from the life story of my great-great grandfather’s sister Thora and is all she included about her aunt Olava. Most of Thora’s other aunts and uncles got short shrift, but this terse mention of something so dramatic still resonated with me.
When Thora immigrated to the US at age 13 with her parents and one brother, they followed in the footsteps of family members. Some had left Norway before Thora was born, including an uncle, an aunt and two brothers. The latecomers settled near some of their relatives, but others had already moved on, and in those days not everyone just popped over to the next state (or Canada) to see their relatives. So when Thora wrote about her life decades later she gave a good account of her siblings, her parents and her grandparents, especially those who lived nearby, but her recollection of her aunts and uncles is somewhat sketchy. When listing her paternal grandparents’ children two of the names are wrong and she rounds things off with:
“There were more, but I did not know them. One daughter + family burnt up in Hinkley fire.” (Thora Miller’s story)
When I got round to researching this there was one clear candidate for the fire victim: her aunt Olava. And “Hinkley fire” had to be the Great Hinckley Fire, a catastrophic forest fire in Minnesota in 1894. If Olava did die in this fire it would have been when Thora was three months old and still living in Norway, so it was understandable that Thora would know little of her. But the fire burned the town of Hinckley to the ground, taking a lot of records with it, so it was still disappointing she didn’t happen to know a little more.
The Great Hinckley Fire
In the late 19th century Hinckley was a booming logging town in aptly named Pine County, Minnesota, with a population of over a thousand. Nearby communities brought the population of the region up into the low thousands, many of them Swedes and Norwegians. Logging was happening all around and whole areas were stripped of trees, leaving behind branches, bark and sawdust to dry as kindling in a hot and dry Minnesota summer.
Forest fires happened all the time in an age when open flame was still a much used tool for staying warm at night, cooking meals and getting rid of debris, even for people living in a dry forest. At the end of August 1894 fires had been burning for weeks not far outside Hinckley and neighboring hamlets, but even in the dry summer they had not posed much threat.
The biggest forest fire disaster in the US had killed over 1,000 in Peshtigo some 20 years earlier, but it had been somewhat forgotten as it happened on the same day as the Great Chicago Fire, which killed far fewer. But most forest fires moved slowly and burned themselves out. Even in a dry summer.
This all changed on Saturday, September 1st, when atmospheric conditions conspired to make a firestorm that swept through the towns and burned several hundred square kilometers in a few afternoon hours. The most detailed account I’ve seen of the aftermath is a booklet titled “Eld-cyklonen” (The fire cyclone), originally published in 1894 in Swedish, collecting horrifying stories from survivors and opining on causes and irresponsible forestry practices. An English translation is available in PDF form at the Minnesota Historical Society and a 1976 reprint of the translation is available on Amazon.
Many of the stories are truly horrifying and I’m not going to include them here, but the booklet is proof of both how much this was news in the day, and also about how sparse the sources are, as it’s not even certain who the author of this booklet was.
The fire completely destroyed Hinckley as well as six nearby settlements and the official count of the dead eventually rose to just over 400, not all of them identified and not including Native Americans and others living in the forest outside the registered settlements.
The following Monday edition of the New York Times reported the news thus:
HUNDREDS PERISH IN FOREST FIRES
Western Towns Destroyed and Citizens Burned to Death in Their Crumbling Houses
TERRIBLE SCENES OF SUFFERING AT HINCKLEY
(The New York Times, Monday September 3rd, 1894)
It goes on to list many of the dead, but both this list and updated ones later suffer one big problem for my purposes: they list all the adult women as “wife”.
To determine with more certainty if Thora was right about her aunt and her family dying in the fire I needed to find out more about Olava’s family. If I had the names of her husband and children I could perhaps determine which of the “SURNAME, wife of X” entries was her, if any.
I take my genealogy hobby seriously, so I was going to do that anyway, but with Norwegian women who emigrated on their own it is not always easy to find them in American sources. The lists of dead at Hinckley aren’t the only ones skipping wives’ names, and marriage is even better at wiping out recognizable surnames than the name changes that happened when Norwegian traditions already in flux encountered a society completely baffled by true patronymics.
So I started back in Norway. Olava was born February 15th, 1858 in Ådal, Buskerud, Norway at a small tenant farm called Odda (The Point) on the Ådal river. Her parents were Knut Torstensen, often called Odda, occasionally Tuftin, and Inger Torine Gulbrandsdatter, and she had five older siblings at the time of her birth.
The closest sibling in age to her was her brother Paul, two years older and one of the first in the family to emigrate, in 1878. When Olava emigrated in 1881 her destination was given in Norwegian records as Bradhead, probably Brodhead in southern Wisconsin, and presumably she went to join her brother.
My search then led to an 1883 marriage in Wisconsin where the bride was an Olava Knudsen, and luckily for me the record included the names of the parents, horribly transcribed as Kneed Thorstenson and Inge Guth.
Still, this match on both parents’ names established that this was highly likely to be the right Olava Knudsen. So now I knew that she married Lars A. Wold, son of Anders Wold and Kari Larson, on April 11, 1883. This then got me three records for children born in Chippewa Falls, Wisconsin: Alfred, born in May of 1884, Ida, born in June of 1886, and Christian, born in February 1890. In all of these the father is listed with the anglicized name of Louis Wold or similar.
With the whole family known I could go back to the lists of dead. The most complete ones are transcribed from “Memorials of the Minnesota Forest Fires in The Year 1894” and include Louis Wold, age 44, who burned to death in the swamp one-half mile north of Hinckley, a place where many victims had sought refuge in vain on wetter ground. The record says he was identified by John Pearson and was buried in Hinckley.
Along with Louis are listed his family, presumed dead in the same location but not identified:
- Wold, Mrs. L. – age 35
- Wold, Alfred – age 12
- Wold, Ida – age 11
- Wold, Christ – age 6
- Wold, baby – age ca. 1
- Wold, Louis Sr. – age 72, father of Louis
Considering the names and ages were given by a neighbor, the matches are too close to be a coincidence, so here we reach the end of the story. However terse and gruesome Thora’s note, it was absolutely correct: one aunt and her family had indeed died in Hinckley.
I find these “dead ends” in the family tree to be especially rewarding to research. Perhaps part of it is a perverse enjoyment of actually finishing something. All other genealogy research “just” leads to yet more cousins to keep track of. But mostly it is that these are stories that often get less attention as people build their family trees, focus on their ancestors and forget almost everything about that “one daughter + family [who] burnt up in Hinkley fire.“
A sort of PS
Some time later I found another reference to Hinckley in my research. Hans Kristian Thorstensen Langbråten, Olava’s nephew and Thora’s cousin, emigrated with his wife Kari in December 1893 with their destination given as Hinckley. Presumably they were going to join his aunt, but luckily for them they were not living there nine months later, or if they were they made it out alive. Christ Britton (as he started calling himself) and Carrie had at least 10 children, the first half dozen or so born in Duluth, but eventually they moved back to Pine County and the rebuilt settlement of Sandstone.
I’m currently (re)learning computer programming, and in trying to come up with a project for an application I started thinking about colors in general and computer colors in the specific. This post is going to be an unnecessarily long and detailed ramble about those topics.
Let me first get the promotion out of the way. I spent a lot of time writing this toy program, so I want to get as much mileage out of it as I can. You can always skip ahead if you just want to read about colors and I won’t bother you about applications and programming until towards the end.
Screenshot of my application.
ComputerRainbow lets you play with generating spectral colors with varying color resolution. The size of each color block is customizable, as is what color you are centered on if there isn’t room for all the colors in the image.
The gradient from “just a few colors” to “hundreds (or thousands) of colors” makes cool patterns. A Windows installation file is available here. Or the source code can be downloaded from GitHub and you can run it on any computer you can install Python and PyQt5 on. PyQt5 is the GUI programming package I used. Unfortunately the sliders appear to be a bit wonky on OSX, but the program will still make pretty pictures.
“All the colors of the rainbow” is a common phrase, sometimes used as a flowery way of saying “all the colors”, which is not strictly true. The rainbow is also often used to teach children “the colors”, where “the colors” are presented as some inherently correct way to split up the rainbow, which is also strictly hogwash. It can seem true, though, because to some extent naming the colors influences our perception of them.
Have a look at the rainbow in the ComputerRainbow screenshot above. Start at the bottom and pretend you are Isaac Newton doing a scientific analysis of the spectrum and your decision on how to split the spectrum up and which parts to give separate names will be considered gospel for centuries.
Now, this generated spectrum isn’t perfect, but it should be fairly close to the real thing, so how many colors did you get? What would you call them? Would it be something like Red, Orange, Yellow, Green, Blue, Indigo, Violet? If so you are in good company. That’s where Newton landed after adding Orange and Indigo, in part because his vision wasn’t all that great, and in part because he really, really wanted there to be seven, closet mystic that he was.
But if that’s what you ended up with, which name goes with which part? Or perhaps you didn’t go with ROYGBIV at all, so let me just get to the point. In modern English, and in other Germanic and Romance languages, the bright band above green, which Newton labelled “blue”, would more likely be called cyan or light blue, and Newton’s “indigo” would be the “real” blue. And in his book on the amazing work he did discovering the nature of white light and colors (Opticks or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light) he shows only five subdivisions of the spectrum for most of the early chapters, until he starts drawing parallels between colors and musical notes and really needs seven.
Now, if Newton had been Russian, he would likely have called the cyan “Голубой” (the color of the sky) and the blue “Синий” (the color of a cornflower). I say that because that is how the Russian Wikipedia page on the rainbow splits it up. And in Russian those are distinct basic colors in much the same sense Blue and Green are in English. There is no “sky blue” in Russian, because the sky is Голубой. Other languages have other subdivisions, often grouping blues and greens differently.
Diving much deeper than that requires an expertise I do not have and a willingness to take sides in an ancient battle between factions of linguists and anthropologists. But it is fair to say that ROYGBIV isn’t a scientific fact, it’s a cultural one.
Okay, it’s partially a cultural one, but there is some biology involved. We have three main types of color sensing cells, often simplified to red, green and blue but more correctly called L, M and S (for long, medium and short wavelength-sensitivity).
The “red” cones have a peak sensitivity in the yellow band of the spectrum, but are often called “red” because they’re the cells mainly responsible for sensing that color, although M-cones have some sensitivity all the way across red.
A small percentage of people have well studied mutations causing anomalous color vision with by far the most common one causing faulty M cones and an inability to distinguish red and green. Because this mutation is on the X-chromosome it is men who are most likely to suffer from this. Most women with the mutation will have a “good” copy on their other X to compensate.
The least common of these “classic” aberrations is tritanomaly, having one faulty S gene and a reduced ability to distinguish green and blue hues. But the most spectacular one might be having an anomalous L gene that, combined with the regular one, gives some women four types of cone and thus a kind of super vision.
Other mammals generally only have two types of cones, while amphibians, reptiles and birds often have three or four, with additional features so that pigeons most likely have five types of color sensor.
If the light is poor, our other light sensors, the rods, come into play in color perception. We only have one type of those, but it too has a peak sensitivity, somewhere between the S and M cones. Rods are 100 times more sensitive than cones, but this doesn’t matter much for how we perceive color unless the light is very low. In low light red stimulates neither the L cones nor the rods much, so it becomes black.
But let’s circle back to my statement about the rainbow not containing all the colors. The rainbow has all the monochromatic colors, the colors with just one wavelength, but we can combine multiple wavelengths as well. If we combine all of them, we get white. If we combine red and green we cannot distinguish that from monochromatic yellow. (See if you can figure out why from the response graphs.) And if we combine blue and red we stimulate just the S and L cones in a way no monochromatic light can and get purple, which just isn’t in the rainbow.
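The red-plus-green point can be made concrete with a toy cone model. The peak wavelengths below are roughly the real L/M/S peaks, but the Gaussian response curves, their width and the chosen intensities are my own simplifications, not real colorimetry:

```python
import math

# Toy cone-response model: Gaussian sensitivity curves.
# Peak wavelengths approximate the real L/M/S peaks; the Gaussian
# shape, the width and the mix intensities below are assumptions.
def cone_response(wavelength_nm, peak_nm, width_nm=60.0):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def lms(light):
    """Total L, M and S stimulation from a list of (wavelength, intensity)."""
    peaks = {"L": 565, "M": 540, "S": 445}
    return {cone: sum(intensity * cone_response(wl, peak)
                      for wl, intensity in light)
            for cone, peak in peaks.items()}

yellow = lms([(580, 1.0)])                        # monochromatic yellow
red_plus_green = lms([(620, 1.16), (530, 0.30)])  # a tuned red/green mix

# The perceived hue in this band is set largely by the L:M ratio, and
# both stimuli barely touch the S cones, so the two lights look alike.
```

In this toy model the L:M ratios of the two stimuli come out nearly identical, which is the whole trick behind RGB displays: they never emit monochromatic yellow, only mixtures our cones can’t tell apart.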
Fascinatingly, it does seem to us to be akin both to the farthest end of the blue spectrum, violet (which is why that is labelled purple on the illustration of Newton’s ideas), and to red, which is why color theory early on glued the two ends together with purple in between and made the color wheel. But the mix of very blue and very red is not in the rainbow. There is also no pink in the rainbow and no brown, but then again brown doesn’t exist, so that is no surprise.
Computers inherited their ability to show images in color from color television and the use of a dense array of red, green and blue phosphors (RGB color). This causes some limitations in what colors can be displayed, because although it’s a great choice for being able to create a large variety of colors, it just can’t do everything. (Which is why Sharp tried to innovate with an RGBY panel, which flopped because no one was really missing “super clear yellows” and because no one was recording in RGBY, so that channel had to be generated in the TV.)
What distinguished computers from the TVs of the day, though, was that a computer had to treat color as digital, whereas TV signals had analog color. So the computer folk had to decide how many colors to use. IBM decided on 16, and being the biggest in the business their CGA became the standard. 4 of those “colors” were black, white and two shades of grey though, so really there were only 12. Oh, and you had to choose which four you were going to be using. (More detail here.)
Programmers could cheat though. The colors were created by tiny red, green and blue dots in the first place, so if you filled an area with a mix of the colors you had available, it would appear as a different one. Early 1980s computer colors were extremely limited, and programmers could only do so much. But then again early 1980s computers were extremely limited in general and programmers were extremely creative in dealing with all the different limitations, not just color.
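That trick of mixing available colors over an area survives today as dithering. Here is a minimal sketch of one classic variant, ordered (Bayer) dithering, reduced to black-and-white output; the 2x2 matrix is the standard Bayer pattern, everything else is my own simplification:

```python
# Ordered (Bayer) dithering, reduced to a 1-bit black/white output.
# The 2x2 threshold matrix is the classic Bayer pattern.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither_pixel(x, y, gray):
    """Map a gray level in 0..1 to 0 (black) or 1 (white) at pixel (x, y)."""
    threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4
    return 1 if gray > threshold else 0

# A flat 50% gray turns into a checkerboard of black and white pixels,
# which the eye averages back to gray from a distance.
```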
By the end of the 80s computers could display tens of thousands of colors, so I’m not going to dwell on that aspect long. Instead I’m going to write about the difficulties of creating a realistic rainbow from RGB. (Although it wasn’t all that difficult for me. I just found someone who had already written the necessary code.)
If you are mathematically inclined you may have noticed that the colors of the spectrum exist in just two dimensions: there’s a wavelength and an intensity. Computer colors on the other hand are three dimensional, as is our main color perception. There is an intensity of R (or of L response), an intensity of G (or of M response), and ditto for B and S. These don’t correspond directly, because G will stimulate both M and L, but they have the same dimensionality.
So RGB color space should be thought of as three dimensional. In the illustration here the primary colors increase along each axis. The three hidden edges are each a pure primary color and go from black, at the hidden corner, to the brightest red, blue and green at three of the corners we can see.
Finding the hues of the rainbow on this cube is not too difficult; they go around the edges of this flat rendering. But the edges don’t have a constant brightness. The red, green and blue corners are each created with the full illumination of just one phosphor, while the yellow, cyan and purple corners each have two phosphors contributing their full illumination. So a good rainbow needs to form something like a circle inside the cube.
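One common shortcut for walking that edge path is the HSV color model, where “hue” is exactly the position along the red-yellow-green-cyan-blue-magenta loop around the cube. A minimal sketch using Python’s standard colorsys module (this is not what my application does, just an illustration of the hue loop):

```python
import colorsys

def hue_sweep(n):
    """n fully saturated, full-value colors evenly spaced around the hue loop."""
    return [colorsys.hsv_to_rgb(h / n, 1.0, 1.0) for h in range(n)]

colors = hue_sweep(6)
# colors walks the cube edges: red, yellow, green, cyan, blue, magenta.
# Note the brightness problem from the text: (1, 1, 0) yellow has two
# phosphors at full power, while (1, 0, 0) red has only one.
```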
There’s more to it than that, so I’m glad I could just use someone else’s hard work there. But if you are super interested, here’s a page that goes into much more detail: Rendering Spectra
My Application – revisited
That is pretty much all I had to ramble about, but I’ll briefly go back to the application. When I started I had ambitions for something slightly different and more artistic, but my vision turned out to not quite correspond to reality, so I downscaled a little. But I kept some aspects, slightly transformed, like the ability to change the bit resolution.
Now if you try this, you will notice that 12 bits is still pretty good, but 6 bits is awful in more ways than one. Part of the reason for that is that if you are going to have only 64 colors you need to be more thoughtful about which colors to use in the rainbow than I was. In fact I wasn’t thoughtful at all, I’m just rounding the colors from my wavelength converter off to the nearest 6 bit version. Then again, it wouldn’t look very good even with a lot of work. Aren’t you glad graphics development didn’t stop at the end of the 80s?
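For the curious, that rounding amounts to snapping each channel to its nearest representable level at the lower bit depth. A minimal sketch of the idea (the function names are mine, this is not the application’s actual code):

```python
def quantize_channel(value, bits):
    """Round an 8-bit channel (0-255) to the nearest level available
    at the given per-channel bit depth, scaled back to 0-255."""
    levels = (1 << bits) - 1
    return round(round(value / 255 * levels) / levels * 255)

def quantize_rgb(rgb, bits_per_channel):
    return tuple(quantize_channel(c, bits_per_channel) for c in rgb)

# At 2 bits per channel (6-bit color) each channel has only 4 levels,
# so a muted orange collapses hard:
# quantize_rgb((200, 120, 37), 2) -> (170, 85, 0)
```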
NB! This post is about an obvious and egregious mistake made by the New York Times, but differences in population numbers are not the only factors that make these statistics non-comparable between countries, there is also:
- differences in age distribution (older vs younger; differences in baseline comorbidities)
- differences in availability of tests
- differences in eligibility for testing
- differences in tests used (tests may vary in sensitivity and specificity)
so DO NOT leave this post thinking you now have any sort of idea about the differences in severity between countries.
The New York Times today has an article titled Which Country Has Flattened the Curve for the Coronavirus? and they do show two countries that clearly have: China and South Korea. But they then go on to compare countries that haven’t yet Flattened the Curve, and that’s where they royally fuck up.
First they present six countries where “the number of known coronavirus cases is still growing rapidly“. They are Italy, Spain, Iran, France, United States and Germany. The only criterion to be included appears to be “more than 4,000 new cases in the past week“.
Then they present six countries that “have had less severe outbreaks so far“. They are Switzerland, the UK, the Netherlands, Austria, Belgium and Norway. And again, the number of cases in the past week seems to be the only measure considered, this time it being above 1,000 but of course less than 4,000.
Do you spot any differences between those two groups? Like perhaps how, the UK excluded, the second group has populations that are less than half of those for the first group? That can’t possibly have an influence on the number of cases, can it?
Comparing statistics between countries is, as mentioned in the initial warning, difficult, but at a minimum you have to compare per capita numbers. Or, you know, don’t write “these numbers are worse than these other numbers” when anyone with half a brain would know that your conclusion after learning the US had 7,273 new cases last week and Norway had 1,191 absolutely should not be “that looks very bad for the US“.
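The normalization itself is one line of arithmetic. A sketch using the article’s case counts and my own rough population figures (in millions, approximated from the Wikipedia numbers, so treat them as ballpark):

```python
# New cases in the past week, as quoted from the NYT article above.
new_cases = {"United States": 7273, "Norway": 1191}

# Rough populations in millions (my approximations, not the NYT's data).
population_m = {"United States": 328.0, "Norway": 5.4}

per_1000 = {country: new_cases[country] / (population_m[country] * 1000)
            for country in new_cases}

# Per capita, Norway's weekly count comes out roughly ten times the US's,
# the opposite of the impression the raw numbers give.
```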
Here’s an overview for these six countries that includes rough population numbers from Wikipedia and a “per 1000 population” statistic, helpfully color-coded for what is a useless statistic that you should use for nothing except to go “Wow! That’s a bad mistake in the New York Times!”
Data from NYT and Wikipedia. Table produced using formattable in R.
Again, if you are now thinking the US and the UK are doing fine, read the first paragraph over and over until your eyes bleed. But do go away with a commitment never to accept comparisons of data between countries that do not at least acknowledge that a per capita measure would tell a different story.
Have you ever observed and thought about the steps along the way as a child figures out how to do a jigsaw puzzle? They start out mostly just chewing on the pieces, but soon they develop a desire to get to the completed puzzle. Sometimes they need a little help. There’s a stage where they have mastered figuring out in what spot a piece belongs, but not the art of rotating it so the image, and more importantly the shape, fits.
Later comes more abstract wisdom, such as beginning with edges and/or easily identifiable parts of the image and looking for candidate pieces and checking each in turn. Along the way they may ask for help, or need help, but they also need to struggle and try for themselves. If we always lend them a hand when a piece doesn’t immediately slot into place we either slow down the progress of their learning and/or teach them the trick of getting us to do the puzzle for them. But if we never do they are unlikely to bother with puzzles at all.
I recently had the thought that acting as an instructor (a teacher, lecturer, tutor, etc.) could in many ways be modeled as helping a child who is figuring out how to solve a puzzle. There’s the need to give the student the right level of challenge and not start them out with a 500-piece puzzle of the Andromeda galaxy; the need to let them struggle at times; and the need to find the right level of intervention: a word of encouragement, a Socratic question, a suggestion, or even modelling the right approach.
The challenge is though that there is no physical puzzle to observe, no easily definable shared reality to work with. The completed parts of the puzzle are in the student’s brain and it takes time to understand what is there already, what pieces are missing here and there in the interior, and, even more important and time consuming, what shape the puzzle has at the edges.
And the puzzle piece you are trying to add isn’t physical either. Say it is teaching someone about adjectives. That seems straightforward, but how much a student is ready to learn will vary. Do they know anything about sentence analysis? About what a subject and an object are? And what is the goal? To pass the next test? To gain an understanding they can use to move on to the next piece of that puzzle?
The child cares about completing their puzzle and seeing the final picture. The student may not care about adding this piece to their puzzle, or about completing previous parts that you can tell are missing based on their misunderstandings and errors.
And beyond that, it is not just about this one piece. Even more nebulous, but in the end more important, is teaching independence. The end goal, even though it may be years in the making, is for a student to be capable of progressing “on their own”: to analyze their own current state of knowledge, and to seek out resources at the right level to fit a new piece into their puzzle.
Is this a useful model? Maybe it is for some and not for others. And the lesson to take away might differ too. Some instructors need to let students struggle more, some need to get better at choosing the right level of challenge. And some need to nod in recognition when I sign off with: Sometimes you can only do so much with the time, energy, students and learning environment you are in.
I’m an opinionated person, and one opinion I hold is that we should have freedom of thought, including opinions. Some opinions are very stupid though, and the people who hold them should be forced to confront that for real, not just by self-appointed twitter thought enforcers.
Still, we can sometimes learn something from examining stupid opinions, whether it be something about human psychology and our tendency to rationalize rather than reason, or that Gödel’s incompleteness theorem can be extended to opinions as well as math. No opinion can be built on reason alone; there is always some value judgment involved. And once in a blue moon we can combine several stupid opinions and get one good one, as I will do in this post.
Stupid opinion one: We need population reduction to solve X! There’s nothing inherently wrong in being of the opinion that, based on some set of values, humanity, or some subset thereof, would be better off if there were fewer of us, but this is usually a stupid opinion for the following reasons:
- It’s often not an honestly held opinion, just an argument against using any other means to solve X. E.g. There’s no point in limiting oil extraction, what we really need is fewer people.
- It’s often not accompanied by a willingness to discuss what means are acceptable. E.g. Of course I don’t mean genocide, you can use whatever means you want, but until you do, keep your hands off my unnecessarily large pickup truck.
- It’s often an abdication of responsibility. E.g. The problem isn’t me/us, it’s that there are too many others.
Stupid opinion two: Men! Again, there’s nothing inherently wrong with holding opinions about men, but it is a topic with an exceptionally poor bad/good ratio, and the chief issue appears to be over-generalization.
I can hear the cries of Yes, Not All Men! and Not All Men? Really? all the way from here where I’m not even done writing this sentence. (Well, not from you of course, you’re withholding judgement, just barely. Good for you, have a cookie!) And I’m actually going to poop on the former and hand it to the latter (Woo, more cries!). The fact is over-generalization is a lot less common among people who write variations on the theme All men are trash than among those who pipe up with Not All Men in response to any criticism of more or less common male behavior.
While there are All men are trash opinion-holders who exclude only an infinitesimal number of men from their generalizations, most in fact do not include all men in All men, but use it as a justified shorthand for This behavior is so widespread, and so rarely criticized by other men, that a solid majority of men bear at least some responsibility for it, and also it’s a lot less satisfying to tweet “Got cat-called again today! Jeez! All men are trash!” than “Got cat-called again today! Jeez! Those specific men are trash!”
Not All Men sayers, on the other hand (although of course not all are bad; some are just ignorant, willfully or not), are extremely likely to hold strong opinions on what is manly, in an overly generalizing and oppressive way. E.g. Not all men are harassers; in fact it’s manly to want to protect women, not harass them. or Not all men are harassers, and besides, cat-calling is a compliment and it’s just a male trait to chase skirts!
And at the root of this problem (other than the fact that these men are trash) lies the male-female imbalance in how relationships, long and short, form in our culture, with the bulk of the responsibility for action lying on the males. Of course both socially inept males and females end up in relationships in fairly high numbers, but it’s more of an issue for males, and they also whine about it a lot more.
The solution! Fewer men! Fewer women is of course a more efficient way to reduce population, since theoretically each man can have a lot more offspring than each woman, but as we see in China, a surplus of men is just not good, and if most people maintain their norms for what kind of relationships they want to have children in, fewer men also means fewer children.
It also has the potential to solve a lot of the issues of gender discrimination that decade upon decade of fighting for equality hasn’t been able to wipe away. Combating male dominance just becomes much easier if there just happens to be fewer males.
So I modestly suggest we make gender selection technology more accessible, and create incentives to pick girls if the fact that “boys are gross” isn’t enough of one, and in a few decades there will be no more issues with overrepresentation of men without all those pesky efforts to reduce chauvinism that hurt men’s sensitive feelings. And we’ll be well on our way to a reduced population with all the benefits that entails. I can see no way this could go wrong or reason it shouldn’t be policy immediately.
No other post of mine has a part 2, but how could I not write one here. In the previous post I gave an extensive introduction to hypercubes at a fairly low level. I was going to talk about how we can think about and partially understand what goes on with hypercubes by using coordinates, but although I referenced them a few times, by the time I got to the actual hypercubes I had stopped. In this one, though, I will.
But first I want to share two links:
This one I also linked from the previous post; it has some good stuff, but it’s a bit more hardcore math than mine.
The next one I got from the smart people at The Straight Dope Message Board when I asked if anyone could find a good video showing what a straight cut through a hypercube gives us; it also goes through a lot of stuff about hypercubes in detail. And it’s from 1978! The Hypercube: projections and slicing
Okay, so what I’m really going to focus on in this post is …
Surface and volume
In the first link above, the people at physics insights define two terms that are very handy when trying to understand how objects in different dimensions relate to each other. One is n-volume and the other is hyperface. I’m going to explain both of them and justify their use by looking at the coordinates of the points involved. I’m going to refer to hyperfaces as surfaces, or n-surfaces, though, since this isn’t a high-level math text and that just makes more intuitive sense to me. And this time we’re going to start with the cube and work back to a square before we jump to hyperspace.
We start with the cube because it’s the most real in our universe. Truly 2D squares don’t exist, though we can still grasp their properties since they are a part of 3D space. But we’re not going to work with a real cube, we’re going to work with the mathematical one from the previous post.
Okay, so this cube, like all cubes, has eight vertices with coordinates:
- A: (0, 0, 0)
- B: (1, 0, 0)
- C: (1, 1, 0)
- D: (0, 1, 0)
- E: (0, 0, 1)
- F: (1, 0, 1)
- G: (1, 1, 1)
- H: (0, 1, 1)
It has 12 edges: AB, AE, AD, BC, BF and so on. Looking at the coordinates, we see that the end points of each edge share two out of three coordinates. The end points of AB are (0, 0, 0) and (1, 0, 0); the end points of EH are (0, 0, 1) and (0, 1, 1). The remaining coordinate is either 0 or 1 at the end points; if we pick a value between 0 and 1 instead, we get a point on the edge, and if we pick one larger than 1 or smaller than 0 we are outside the cube entirely.
The cube has six faces, ABCD, ABFE, BFGC and so on. If we look at the four points that define the four edges that define one face we see that there is one common coordinate. A, B, C and D all have z-coordinate 0. B, F, G and C all have x-coordinate 1. As with the edges, the faces follow the rule that if one coordinate stays 1 or 0, and the others are between 0 and 1, we are on the face. We can define the whole surface of the cube this way.
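These coordinate rules are easy to verify by brute force. Here's a minimal Python sketch (mine, not from any of the linked sources) that generates the cube's vertices and counts the edges and faces using exactly the coordinate-sharing rules above:

```python
from itertools import combinations, product

vertices = list(product([0, 1], repeat=3))  # the 8 corners of the unit cube

# An edge connects two vertices that differ in exactly one coordinate,
# i.e. they share two out of three coordinates.
edges = [(u, v) for u, v in combinations(vertices, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

# A face is the set of four vertices that share one fixed coordinate.
faces = [[v for v in vertices if v[axis] == value]
         for axis in range(3) for value in (0, 1)]

print(len(vertices), len(edges), len(faces))  # 8 12 6
```

Eight vertices, twelve edges, six faces, just as the coordinates promised.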
We can then use this to define what is inside the cube and what is outside. Any point where no coordinate is less than zero and no coordinate is higher than one is inside the cube, making up the volume.
Got that? Great, let’s jump back to 2D then.
Any point in this plane whose two coordinates are both in the range [0, 1] (that’s 0 to 1, end points included) is part of the substance of the square, which we have defined as the volume of a shape. I know we usually call it the area of a 2D shape, but we’re dimension-jumping now, so since it follows the same rules as 3-volume, we shall call it 2-volume rather than area.
Ditto surface. If the coordinates are in the range [0, 1] and at least one of them is 0 or 1, we’re on the surface of the shape. For the 2D square it’s a 1-surface. (The pros would say that a side, e.g. AB, is a hyperface of the square.)
If we’re outside the square and want to get in, we have to pass the surface at some point, but it can be any point.
The same applies to the cube. If we’re outside and want to get in, we have to pass the surface, but for the cube the surface is what we usually call a surface. And if all this seems obvious we are ready for the hypercube!
Okay! So, in 4 dimensions, we have a 4-volume that is all the points that have all coordinates in the range [0, 1]. And we have a surface that is all the points that fulfill that restriction plus at least one coordinate is 0 or 1. But that means one coordinate is always the same, so we can sort of ignore that, and three of them range from 0 to 1. And what kind of shape is it that has three coordinates ranging from 0 to 1? A cube! The surface of a 4-volume is a 3-surface made up of cubes, with coordinates of one of these forms, with x, y, z and w in the range [0, 1]:
- (0, y, z, w)
- (1, y, z, w)
- (x, 0, z, w)
- (x, 1, z, w)
- (x, y, 0, w)
- (x, y, 1, w)
- (x, y, z, 0)
- (x, y, z, 1)
Eight cubes! Which should surprise no one at this point.
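If you like, the counting generalizes to a tidy formula: an n-cube has C(n, k) * 2^(n-k) pieces of dimension k, because we choose which k coordinates vary and fix each of the rest at 0 or 1. A quick sketch (my own, purely as a sanity check, not something from the linked pages):

```python
from math import comb

def k_faces(n, k):
    """Number of k-dimensional pieces of an n-cube:
    choose which k coordinates vary, fix the rest at 0 or 1."""
    return comb(n, k) * 2 ** (n - k)

print(k_faces(3, 0), k_faces(3, 1), k_faces(3, 2))  # 8 12 6: the cube
print(k_faces(4, 3))                                # 8: the bounding cubes
print(k_faces(4, 0), k_faces(4, 1), k_faces(4, 2))  # 16 32 24
```

The n = 4, k = 3 case gives the eight bounding cubes; as a bonus the formula says the hypercube has 16 vertices, 32 edges and 24 square faces.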
Isn’t this pretty much what I wrote in the previous post? Yes, yes it is. But with the added mathematicalness of coordinates all the way, and an opportunity to talk about entering and exiting the hypercube some more. We can get from outside the hypercube to inside the hypercube through any point in one of the surfaces, and the surfaces are cubes. (0.5, 0.5, 0.99, 0.5) is in the middle of the hypercube along most axes, but close to the edge along the z axis. (0.5, 0.5, 1, 0.5) is on the surface of the hypercube, or more precisely it’s in the center of the top cube in our projection. And (0.5, 0.5, 1.01, 0.5) is another small step along that same straight line, but now it’s outside the hypercube. (In a way we can’t really meaningfully illustrate in 3D.)
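Since the inside/surface/outside rule is the same in every dimension, we can check the three example points with a tiny helper function (a sketch of mine, not from the post's sources):

```python
def classify(point):
    """Classify a point relative to the unit n-cube [0, 1]^n."""
    if any(c < 0 or c > 1 for c in point):
        return "outside"
    if any(c == 0 or c == 1 for c in point):
        return "surface"
    return "inside"

print(classify((0.5, 0.5, 0.99, 0.5)))  # inside
print(classify((0.5, 0.5, 1.00, 0.5)))  # surface
print(classify((0.5, 0.5, 1.01, 0.5)))  # outside
```

The same function works unchanged for the square and the cube; only the length of the tuple changes.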
Mind sufficiently blown, or perhaps you’ve just given up on understanding what I’m babbling about? Well give me one last chance. I’m going to take this latest example and translate it to a 4D space we are actually familiar with, namely space-time.
There’s still going to be room for headaches though, as we need to be able to understand what a unit hypercube is in 4D space-time. It’s a 1 × 1 × 1 cube that exists for 1 unit of time. You can pick your units, but your unit of time has to be the same length as your units for space … It’s a cube that pops into existence, exists for a bit without moving relative to our space coordinates, and then stops existing.
Now, if we use space-time coordinates (x, y, z, t) and pick three space coordinates, say (0.5, 0.5, 0.5, t), then if we put a dot there and let t vary, we get a line. It looks like a point to us, but it exists through time, so it’s a line. Now, at what we’ve designated as time 0, we let our unit cube pop into existence. Our point is in the middle of the 3-volume of the cube, without having gone through any of the 2-surfaces, but since its coordinates are (0.5, 0.5, 0.5, 0) it is on the 3-surface of the hypercube.
The cube and the point/line keep existing, making the cube trace out a hypercube, and then, when the point is at (0.5, 0.5, 0.5, 1), the cube pops back out of existence and the point/line exits the hypercube.
Anyone who explores geometry beyond the high school level, including adventurous high schoolers, and starts wondering about and researching the sequence 1D, 2D, 3D, 4D … (WTF! OMG! LOL!) learns about hypercubes, one of the most basic 4D “shapes”. Since humans cannot perceive 4D objects, being sadly bound to the three spatial coordinates, this usually leads to a small headache and often a desire not to be troubled with such thoughts again.
A very small number go on to study higher geometries in university, which is some really mind-bending sh-t, or so I expect, as I haven’t done it myself. And a still small, but larger, number are somewhere in between: not quite willing to commit to spending money on a degree in the stuff, but still curious. This text is for the particular segment of that group who have just been waiting for a good, but basic, explanation.
(Technically hypercubes can be 5D, 6D, and so on, and the 4D version is more precisely called a 4-cube or tesseract. But dimensions higher than the fourth are even more ineffable, so we’re fine using hypercube and meaning the 4D version.)
1D and 2D
To get beyond the Wow weird!-stage and get some proper understanding of the challenges we face dealing with 4D space we need to establish some basic rules and a common understanding of what goes on in lower dimensions. I’m going to use both images and coordinate references, but as I was working through this I didn’t end up using coordinates much for the actual hypercube stuff, so you’re safe to skip it. I am however planning a followup post, as I’m not done exploring this space (wink, wink, nod, nod) and the followup will involve coordinates and equations.
A 1D object is a sort of “perfect” string. It has no thickness, only length. A high-wire acrobat does most of their work in a 1D world and you can describe their position with just one coordinate. If you set 0 at one of the platforms, then the position of the acrobat is perfectly defined by how far they are from that platform and to get from the point 2 meters from the platform to the point 3 meters from the platform, they have to pass through all the points on the line/wire in between.
If there is a barrier on the wire, for instance another acrobat, no one can get past, unless one or both acrobats are ghosts who can pass through each other, or, more relevant in this context, they jump and thus move through another dimension, cheating the rule that you have to pass through all points on the line.
One way to think of and define a two-dimensional shape is to start with a one-dimensional one, say a line of length one, and move it through a second dimension. If in the 1D world the line had endpoints (0) and (1), we add a coordinate to place it in a 2D world, (0, 0) and (1, 0), and then we move the line at ninety degrees to itself so the endpoints get to (0, 1) and (1, 1), and connect the new endpoints to the points where they started. This gives us a square.
This might seem like a cumbersome way to think of a square, but it’s going to come in handy later, so look at this illustration of a square and try imagining that a copy of the line from A to B moves upwards until A is at D and B at C, and that the infinite positions the line has on the way define the square ABCD.
In addition to the original line AB and the new line DC, the square is bound by BC and AD. One side of each of these lines is inside the square, and the other side is outside the shape. That inside and outside are, however, 2D properties of the line. In 1D the inside and outside would be points on the line, in contrast with the segments on the extensions of the line beyond the end points.
Like the perfect 1D line, the perfect 2D square doesn’t exist in reality, but it’s slightly easier to imagine one. If I draw it on a piece of paper for instance and then draw a line from A to C, I cannot draw a second line from B to D inside the square, without crossing the first one. I can lift the pencil, but then I’m no longer inside the 2D square. Even if we may intuitively consider it a sort of fence, the 2D shape is restricted to just the surface of the paper. A point above it is not inside it.
If we make a straight cut through the square, the cut forms a 1D line. A cut through the 1D line makes a zero dimensional point, which can give you a headache all by itself, but we’re going to move the other way, and the nature of cuts becomes one way we can examine the strange nature of geometries beyond our own.
You can probably envision it yourself, but here is an image of a straight cut through the square. We can make infinitely many straight cuts by picking two random points not on the same side, and connecting them with a straight line. All the points on the line will be in the square and can be described with two coordinates.
By now you are possibly thinking This is child’s play! When will we get to the good stuff?, but if you’ve skipped ahead to 4D you probably realized that we need a hard, concise repetition of lower dimensions to make sense of the higher ones. And if you skipped ahead and found it easy … What are you doing back here? You should have realized you’re not in the target demographic for this text! Fine, stay, but be quiet.
So, anyway, we’ll define a 3D shape from a 2D shape like we just did to go from a line to a square. (There’s a nice animation here, in a post similar to this, but more technical.) We take the square ABCD and put it in a 3D space with coordinates (0, 0, 0), (1, 0, 0), (1, 1, 0) and (0, 1, 0). We dub the coordinates (x, y, z) and we notice that the z-coordinate, the red one, is 0 for all four points. This means that all the points in the square also have 0 as their z-coordinate. Now we move a copy of the square up the z-axis one unit, and so we get a new square at (0, 0, 1), (1, 0, 1), (1, 1, 1) and (0, 1, 1), and we call the sum of all the infinite squares along the way a cube.
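The sweep construction translates almost word for word into code: the vertices of the next shape up are two copies of the current vertices, one with the new coordinate set to 0 and one with it set to 1. A small illustrative sketch (mine, not anything from the post's sources):

```python
def extrude(vertices):
    """Sweep a shape one unit along a new axis: copy every vertex
    with the new coordinate 0 and again with the new coordinate 1."""
    return [v + (0,) for v in vertices] + [v + (1,) for v in vertices]

point = [()]               # a 0D point: one vertex, no coordinates
line = extrude(point)      # [(0,), (1,)]
square = extrude(line)     # 4 vertices
cube = extrude(square)     # 8 vertices
hypercube = extrude(cube)  # 16 vertices

print([len(s) for s in (point, line, square, cube, hypercube)])  # [1, 2, 4, 8, 16]
```

Each sweep doubles the vertex count, which is why an n-cube has 2^n vertices.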
Now this is where you need to really start paying attention! What you see here is of course not a cube. It’s a flat image made up of rhombuses, with some of the lines dashed and with a variety of shading. But you perceive it as a cube, because human brains are amazing.
We can make perspective drawings of cubes in various ways. In this particular one we started by drawing the square ABCD “tilted” inwards into the screen, and then moved it upwards. When in a bit we start doing something similar to create a hypercube, it’s necessary to have a more solid understanding of what we’re doing, so we’re going to make a second representation of a cube, one where we start with ABCD perfectly “flat”, facing us.
We’ve already seen what that looks like, it’s the first image in this post. Now imagine we make a cube from this square by moving it inwards. It can’t actually move inwards of course, all we can do is to draw it smaller, use some dashed lines, and let the brain do the rest, but as I just wrote: Human Brains are amazing!
You may notice an “unnecessary” line going through A and E and “beyond”. That is the z-axis in this version of the cube. So points A and B are at (0, 0, 0) and (1, 0, 0) respectively, and E and F are at (0, 0, 1) and (1, 0, 1).
Now like the 2D square was bound by four lines, the 3D cube is bound by six squares. And for each of those squares there is an outside and an inside. And if I make a barrier, like putting in a rectangle with corners at AFGD, we cannot get from B to H inside the cube, without crossing through the plane AFGD.
We can also consider AFGD the rectangle we get if we make a straight cut through the cube through these corners. All straight cuts through a cube make planes and the part of the plane inside the cube can be a square/rectangle, a triangle, a pentagon or a hexagon. (I couldn’t figure out all of those in my mind and was slightly surprised by them. Maybe it is obvious to you, but if not, I suggest you go look at slicing cubes next.) AFGD forms a rectangle with sides that are 1 and 1.41 long. (Can you see why?)
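Whether or not you see why, the side lengths of AFGD drop straight out of the coordinates, since 1.41 is (roughly) √2, the diagonal of a unit square. A quick check in Python (mine, just for verification):

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Corners of the cut AFGD, using the cube coordinates from the previous post.
A, F, G, D = (0, 0, 0), (1, 0, 1), (1, 1, 1), (0, 1, 0)

print(dist(A, F))  # ~1.414: this side is the diagonal of a unit square
print(dist(F, G))  # 1.0
print(dist(G, D))  # ~1.414
print(dist(D, A))  # 1.0
```

Two sides of length 1, two of length √2 ≈ 1.41: a rectangle, as promised.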
What happens when we slice a cube will be important to understanding what happens when we slice a hypercube, but to make things easier for both you and me, we’ll compare with slicing parallel to one (and therefore two) of the bounding squares, which will give us this:
Now this plane has a few important properties. One is that it’s one of the squares we got on the way when we created the cube by moving a square from ABCD to EFGH. Another is that the z-coordinates for its corners IJKL are all the same. And a third is that the coordinates only differ from ABCD and EFGH in the z-coordinate. D is (0, 1, 0), H is (0, 1, 1) and L is (0, 1, 0.6). And we can make an infinite number of similar planes just by using ABCD as a starting point and changing all the z-coordinates to some number between 0 and 1.
Now one important thing to note, which might seem obvious but will be very important when trying to wrap our heads around the hypercube: the square IJKL is inside the cube, as the result of a straight cut through it, but it is not inside the square ABCD. That’s just a result of how we’ve drawn it. If I’d drawn it in the first representation of the cube, the 2D relation between them would have been completely different. Think of turning this cube; at some point the square IJKL is going to move “outside” the frame of ABCD.
Oh, and one more thing to note about a cube like this. Each square has one other square connected to it at each edge. There are no “free edges” not touching another edge. This is also true of the square: each of the four lines has a neighbor at each end point. There are no free ends. And each square has an inside and an outside relative to the cube. You can get from the inside of one square to the inside of another square, or from outside to outside, without crossing the boundaries of the squares. It’s possible you’ll want to scroll back and look at this again to really appreciate how impossible it is to actually understand hypercubes once I’ve described what happens to this property when we add a dimension.
And that dimension will be added now!
To get four dimensions we need to add another axis to the coordinate system, and it needs to be at 90 degrees to all the previous ones. This is of course not possible in our limited world, so we have to cheat. There are a number of different ways we can do this cheating, but I’m going to do the one that is the closest analogy to starting with a face-on view of a square and moving it “inwards”.
We take a cube instead of a square, and we move it “inwards”. Like with the square it becomes smaller, and it looks like it is inside the original cube, but it is actually not. But unlike with the square, our brain does no magic to make us see it as a hypercube. It just looks like a cube inside a cube and we have to do the figuring out how things are connected by thinking about how this is analogous to the 3D version.
So now that we are in four dimensions, every point has four coordinates. A is for instance at (0, 0, 0, 0) and E is at (0, 0, 1, 0). If the corresponding points I and M were actually inside the “bigger” cube, I’s coordinates would be something like (0.2, 0.2, 0.2, 0). But that’s not the situation this representation is supposed to illustrate. Instead we have to try to think of the shrinking of the cube as a movement “inwards” in a 4th dimension, which gives I and M coordinates equal to A and E, except for that fourth one, so (0, 0, 0, 1) and (0, 0, 1, 1).
Also like with the representation of the cube, the bounding shapes are distorted. The 3D cube has six 2D sides that are squares. Front, back and four sides that look like trapezoids in this flat drawing, but are squares in reality. I’ve highlighted one here.
The 4D hypercube has “sides” that are 3D cubes: a “front” and a “back” along the fourth dimension, which look like a big cube and a small cube in the 3D version. (Remember that this is a 2D drawing of a 3D projection of a 4D object!) And for each face of the “front” cube, a “side” cube swept out when we shift this cube to the “back” position (the little cube), which in this projection look like truncated square pyramids.
And now for the bits that will boil your brain (there are lots of bits that boil your brain in examining hypercubes, but these are the bits I’ve decided to include here). Just like the points that are on the faces of the cube aren’t really in the cube, but are the boundary of the volume, the points in the volumes of the eight cubes that bound the hypercube are not in the hypercube, they make up the 3D surface of this 4D object.
I’m considering coming back to this and explaining it further in terms of coordinates and terms such as n-volume and n-surface in a follow up post. Let me know if you are interested in that. But purely “visually”, imagine that you take a point in the middle of one of the faces of the 3D cube, and a point in the middle of a different face, and connect them with a line. This line does not cross outside a 2D face in any way, it just connects two of them through the added third dimension.
Similarly, a line from any point in one of the cubes can go through the 4th dimension in a straight line to a point in a different cube, without crossing through the faces of the cubes. Having trouble picturing that? Well, that’s because it’s impossible. I kind of have doubts myself, but I intend to prove it in the follow-up post.
Two more things for this post though. If you do a straight cut through a 2D square, you get a 1D line, remember? If you do a straight cut through a 3D cube, you get a plane, which is a square if you cut parallel to one of the sides. And if you do a straight cut through a 4D hypercube, you get a 3D cube. Yes, it’s bonkers. (It’s so bonkers I’m only 95% sure it’s correct. It’s definitely true that if you lock one of the coordinates to a single value, say z = 0.5, the points that fulfill that restriction and are part of the hypercube make up a cube. But is that a straight cut?)
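The fixed-coordinate version of the cut is at least easy to verify mechanically: lock z at 0.5 and the remaining coordinates x, y and w still range freely over [0, 1], which is exactly a unit cube. A sketch of that check (mine, with the same "but is it a straight cut?" caveat as above):

```python
from itertools import product

# Corners of the slice z = 0.5 through the unit 4-cube:
# x, y and w still take every combination of 0 and 1.
corners = [(x, y, 0.5, w) for x, y, w in product([0, 1], repeat=3)]

print(len(corners))  # 8, the corner count of a 3D cube
# Dropping the locked coordinate leaves exactly the unit cube's corners:
print(sorted((x, y, w) for x, y, _z, w in corners) ==
      sorted(product([0, 1], repeat=3)))  # True
```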
Finally, let’s all think about the weirdness of the fact that we can expand the Pythagorean theorem to hypercubes.
We tend to learn about Pythagoras as a property of right triangles, which it is, but it’s also convenient to think of it as a way to find distances in a Cartesian coordinate system. Here’s an illustration to work from:
If we have the coordinates of the end points of c we can very easily find the lengths of a and b, and then we know that c² = a² + b² and use that to find the distance c.
Same goes in a cube:
The diagonal from the front bottom right to the back upper left is d. I haven’t drawn it in, because it’s hard to visualize correctly in this view, but I think you can figure it out. The length of d, and so the distance, is given by d² = a² + b² + c², where a, b and c are all at ninety degrees to each other. (Showing that this is correct using the triangle version is high school math. Try to figure it out from first principles if you can’t remember it.)
Note that if we start at front bottom right and go left and up, we have two different edges we could follow, but only one is ninety degrees to both the previous edges. Same goes in the hypercube where we actually have three options, but only one is ninety degrees to all the previous edges in our path.
The distance between B and P here, if all the edges are length 1, is 2, because e² = a² + b² + c² + d². For the unit square (2-cube), unit cube (3-cube), unit 4-cube, unit 5-cube and so on, the longest diagonal is √n, where n is the number of dimensions involved, and the square root of 4 is 2.
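The same computation works in code for any number of dimensions. A small sketch (mine) of n-dimensional Pythagoras and the unit n-cube diagonal:

```python
from math import sqrt

def distance(p, q):
    """n-dimensional Pythagoras: square the difference along each
    axis, sum them, and take the square root."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Longest diagonal of the unit n-cube: from the origin to (1, 1, ..., 1).
for n in range(2, 6):
    print(n, distance((0,) * n, (1,) * n))  # sqrt(n)
```

For n = 4 the diagonal comes out to exactly 2, just like the B-to-P distance.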
There’s more weirdness I want to write about and explore, but to do that I need to do some heavier math, so that’ll have to wait for the next post.
I just received an email from my wife about her return flight, and Google automatically recognized it as flight information and gave me an update on it. Self-driving cars are out on our roads, for better or worse. Decision makers are increasingly relying on complex handling of huge databases to figure out how to optimize the resource they have at their disposal.
Now I’m not against new technology. For instance I think we should put more money into developing good self-driving technology, since it wouldn’t be that hard to make them safer than human drivers are today. But it’s important to bear the limitations in mind as systems get more and more complicated. A decision support system can only work with the values (and I mean both the data input and the evaluation of what is “good”) it has been given, and even then it might behave differently from expected, so having knowledgeable humans supervising and evaluating is necessary, and the higher the stakes and the higher the complexity of a decision, the more important it is that a human has final say. (Driving is a relatively simple task really by these criteria.)
Now the existence of computer errors and their use as arguments against computerizing anything is nothing new. It’s all out there, from jokes about having to Ctrl-Alt-Delete your car to more serious evaluations of the risks of computerization. But as computers improve, the errors get rarer and have less of an influence on your average Jill’s feelings about computerization. But rare isn’t nonexistent, and being aware of the causes and consequences of rare errors will be essential as we evaluate the changes we make to society. And game glitches are a great example to use to teach about this, for multiple reasons.
For popular games even the rarest events will occur for many people. Many gamers consider it fun to discover, exploit or just mess around with glitches. This means that glitches in computer games are found, documented and publicized, and recently I stumbled over the world of Zelda-glitch YouTube.
Now here I have to do a quick detour and wax poetic about the wonders of video game physics. Most gamers will know this and most non-gamers will not, but at the heart of any game involving 3D motion lies a physics engine. And a modern physics engine can do fantastic things. Combined with other just as marvelous pieces of simulation software, they can create entire, dynamic worlds that, under normal circumstances, seem as realistic as their developers want them to be. If you want characters to perform realistic jumps and throws, they will, and if you want jumps to be superhuman and punches to create fireballs, that will happen too. Modern games, even when relying on an existing physics engine, require a large development team to get everything right.
One such game is The Legend of Zelda: Breath of the Wild (BOTW from now on), the latest in a very popular series of games and the first in the series to feature an open world with truly realistic graphics. You can climb every mountain and ford every stream, although the attempt might kill you, and admire the movement of every blade of grass in the wind as you do so. The huge team of developers has thought of nearly everything and built a system that makes the world around your character behave as expected across the millions and billions of combinations of events that become completely unpredictable when you let a player loose in an open world.
Well, it will behave as expected in almost every case. I personally have never had a glitch happen. At least not one that didn’t just crash the game outright, and I’m not even sure I’ve had any of those. Other players have, though, and when something odd happens, they try to do it again, figure out what causes it, and work out how it can be exploited.
Take horses, for instance. They roam the BOTW meadows in small herds of a handful or so, they serve as mounts for some enemies, and you can tame one yourself and gain a mount. What you can’t have, though, is a realistic herd size. That is, unless you perform a series of sneaky steps as shown in this video (the link takes you straight to the interesting bit; you don’t need to watch more than 20 seconds or so). Exploiting this glitch, you can get over 200 horses roaming in the same field and walk among them. Or you can push it up towards 300, and the game will crash on you as simulating all those horses walking around becomes too much for it.
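Why would 300 horses crash a game that merely strains under 200? I don’t know BOTW’s actual internals, but a plausible culprit in any engine is that naive pairwise interactions (collision avoidance, herd behavior) grow quadratically with herd size, as a quick back-of-the-envelope shows:

```python
# Every horse potentially interacting with every other horse means the
# work per frame grows with the number of PAIRS, not the number of horses.
def pair_checks(n):
    return n * (n - 1) // 2  # n choose 2: each pair checked once

# 200 horses -> 19,900 checks per frame; 300 horses -> 44,850.
# Going from 200 to 300 adds 50% more horses but more than doubles the work.
```

This is why real engines invest heavily in spatial partitioning tricks to avoid checking every pair, and why those tricks still have a breaking point somewhere.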
Now that’s just a bit of fun. The people who figured out the glitch took advantage of behaviors the developers wanted in the game, in a way they wouldn’t have expected, and nothing much changes except that there are a lot of horses. But the other glitch in the same video is more disturbing. Here you go through some weird steps, abusing elements of the game the developers didn’t foresee players messing with, and the result is not a crash but a total breakdown of the behavior of physics in the game. Really appreciating how bizarre it is requires some familiarity with the game, but this clip should look weird to anyone: the game basically loses track of what is up, down or sideways. (There’s a bit in the horse clip where the game fails to turn each horse’s mane along with the horse, but this is more dramatic.)
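How can software lose track of up? One classic mechanism — and this is only a toy illustration, not what actually happens inside BOTW — is accumulated numerical error: apply a slightly imprecise rotation over and over, and the vectors that define orientation quietly stretch out of shape.

```python
import math

def rotate(v, angle, digits=None):
    """Rotate a 2-D vector; optionally round the matrix entries to mimic
    a low-precision engine."""
    c, s = math.cos(angle), math.sin(angle)
    if digits is not None:
        c, s = round(c, digits), round(s, digits)  # imprecise rotation
    x, y = v
    return (c * x - s * y, s * x + c * y)

up = (0.0, 1.0)
for _ in range(100_000):            # many tiny frames of rotation
    up = rotate(up, 0.01, digits=3)

drift = abs(math.hypot(*up) - 1.0)  # a healthy "up" vector stays at length 1
# With full precision the drift is negligible; with the truncated matrix,
# "up" has grown far past unit length and orientation is now garbage.
```

Engines guard against exactly this kind of decay by renormalizing orientation data, which is part of why it takes such a weird sequence of player actions to break things.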
Now what has this got to do with the future of computerization? The point is that a large team worked for years to make this game behave the way they wanted and the way users would expect. Millions of people have played this game and enjoyed it immensely, not experiencing, or at least not paying much attention to, any glitches. And yet they are there. Physics can break down in the game, and the software has no idea that something is wrong (or right; I don’t mean to anthropomorphize here). It’s obvious to the human eye, though, and apparently this error was egregious enough that the developers have since fixed it. But as we continue using computers to take over tasks that we feel they can do better, safer or with fewer resources, or tasks that we simply can’t do ourselves, it’s essential never to lose sight of their inability to notice and rectify obvious wrongs we have failed to account for.
Also, just so no one is fooled into thinking this is only an issue with very complicated systems: it doesn’t take much computerization before we humans are lulled into a false sense of security and fail to notice major errors. Just ask the people who have accidentally traveled to Sydney, Canada instead of Sydney, Australia due to combinations of human error and flawed ticket search systems. 😉 Google could probably have figured that out and warned you, if you also emailed a friend to tell them you were going to Australia.