Experience and Education by John Dewey


Originally posted at Gamelier.

I’m not accustomed to reading philosophy, but I really enjoyed Experience and Education, by John Dewey. It’s a slim book of not even 100 pages, yet it is beautifully written, exceptionally clear, and intelligent.

Experience and Education was written in 1938 as a follow-up to an earlier book, Democracy and Education, which he had written in 1916. It’s incredible to me that there was already, at that time, a notion of “progressive” vs. “traditional” education. Progressive education includes learning by doing, problem solving, working in groups, and personalizing education to fit the student, among other things. Interestingly, Dewey frames these schools of educational thought in reference to systems of government: the traditional schools were autocratic, the progressive schools democratic.

Dewey starts Experience and Education by stressing the need for an “educational philosophy” based on experience. I gather that he fears a backlash from the traditional viewpoint that sees the progressive schools as disorganized and lax. He points out that creating a school around a rejection of previous ideas is not the same as beating a viable new path. The book lays out the foundations for such a philosophy. Dewey places “experience” at the center because he sees education in general as a series of experiences, each of which changes the person experiencing them, impacting what experiences the person will seek or avoid in the future, and what they will learn from them. Certain experiences can stimulate growth, widening the possibilities for the person, whereas others can stunt growth, making them avoid whole areas of potential experience.

This reminds me of James Paul Gee’s description of a “damaged learner” in What Video Games have to Teach Us about Learning and Literacy. Once damaged by a bad experience in a school subject, one that left them feeling overwhelmed or bored, a student may avoid the subject whenever possible, and even build their self-image upon a rejection of it.

Next, Dewey discusses how discipline in schools comes down to the social context of the environment. As Dewey says: “The [traditional] school was not a group or community held together by participation in common activities. Consequently, the normal, proper conditions of control were lacking. Their absence was made up for … by the direct intervention of the teacher, who ‘kept order’. [In the new schools], the primary source of social control resides in the very nature of the work being done as a social enterprise in which all individuals have an opportunity to contribute and to which all feel a responsibility.” Once again, this sounds to me much like how Lee Sheldon sees the teacher’s role as a “dungeon master” in The Multiplayer Classroom, organizing opportunities for learning rather than directly passing on ideas to students.

And yet, not just any kind of communal experience will do. The later chapters of Experience and Education expound on the meanings that Dewey attaches to “freedom” and “purpose”. For Dewey, the kind of freedom that should be prized in schools is not the ability to do whatever one desires at a particular time, but rather the “power to frame purposes, to judge wisely, to evaluate desires by the consequences which will result from acting upon them”. In short, the power for students to fix goals and set out concrete steps to achieve them.

Finally, Dewey points out the connections between the experimental method in science and the progressive model of education. In science, ideas are not simply received as final truths, but tested as hypotheses. Experiments are carefully designed, observed, recorded, and analyzed. Once again, playing games can also be seen as performing the scientific method in miniature, though most often players do so informally.

Vehicles by Valentino Braitenberg

With the excitement and activity of A-MAZE calming down, I finally have a moment to write about a rich book that I will want to read again and again: Vehicles: Experiments in Synthetic Psychology by Valentino Braitenberg. Since it was written in 1984, I’m truly lucky that a colleague from the Gamelier recommended it to me; otherwise I doubt I would ever have discovered it.

The principle of “Synthetic Psychology” invoked in the title is that relatively simple machines can exhibit complex behavior that we would classify as “living”, “instinctive”, “willful”, “smart”, “attentive”, etc. The author demonstrates this by setting up a series of thought experiments. He begins with a “vehicle” with a single motor activated by a sensor that is attuned to the temperature of its immediate surroundings.

Things get immediately more interesting in the next example, where a vehicle has a motor and a sensor on both sides. Depending on how the connections are made between the motors and sensors (same side or crossed), and on whether the connection is positive or negative, the vehicle will circle a source of heat, charge right at it, or run away from it.


This is only the beginning. Each chapter introduces new layers of complexity into these simple vehicles and then explores how they would behave in various circumstances. The author covers non-linear activation, thresholds, connection networks, selection, resistant connections, and many other mechanisms. Some of these concepts reminded me of the little I learned about neural networks in school, but the author explains each wrinkle gently enough that I don’t believe any prior knowledge is necessary to enjoy the book.

Ultimately, the book nicely sidesteps the question of “intelligence” that pollutes common discussions of artificial intelligence. By the time you are in the middle of the book, you have to admit that the machines behave in ways you would only associate with living things, and yet they are made of nothing but sensors, motors, and wire.

The appendix to the book is filled with biological references for the mechanisms that were introduced as purely mechanical. Given that the book is now 30 years old, it would be interesting to know what has been discovered since then.

Oh, and there are lovely line drawings placed in the middle of the book that evoke faraway worlds and fantastic machines. It makes me think what an incredible video game this would make, especially the complex “social” aspects of a world full of such vehicles!


Cross-posted from the Gamelier

What Video Games have to Teach Us about Learning and Literacy by James Paul Gee


“What Video Games have to Teach Us about Learning and Literacy” is a book about how video games motivate players to learn how to play them, despite or even due to their complexity and difficulty. James Paul Gee compares how players learn video games to how people learn in school, and discusses how schools and other learning environments would benefit from imitating these aspects of video games.

In that way, the book reminds me of Jane McGonigal’s “Reality Is Broken: Why Games Make Us Better and How They Can Change the World”, which also takes the tack of discussing how video games motivate players to take on difficult challenges, with the idea of applying these tactics to other domains (see my review of Reality is Broken). One difference is that where McGonigal focuses on forming personal habits (like weight loss or cleaning the house), James Paul Gee is more interested in school learning.

For me, the most interesting sections of this book dealt with the importance of identity. James Paul Gee argues that learners take on an identity with regard to a field of study. The identity can very well be positive: someone who is good at the subject, who learns quickly, performs well, etc. But it can just as easily be negative, in which case the learner will be scared off and reluctant to take on the subject in the future. The author says that such a learner is “damaged”, and that such damage is difficult and time-consuming to repair.

This rings true of my own experience. I always did fine in school, but definitely formed negative relationships with certain subjects. In particular, despite five years of studying Spanish in junior high and high school, I never really learned enough to converse. I therefore decided that I was a “language idiot”, forever handicapped in that area. I suppose that this resignation was comforting, since it meant I didn’t have to try. When I ended up in France, it was so frustrating to not be able to understand and relate to those around me that I was extremely motivated to learn. And the reality is that I can learn languages just fine. I still wouldn’t say that I’m “talented” in languages, but putting in hard work over a long enough period of time is likely the secret to learning anything at all.

The author shows how the notion of identity extends to the people who work in the field. For example, doing science effectively makes the learner a scientist. The closer the learner associates themselves with the values of a scientist, the easier this learning becomes. Once again, the opposite is also true: a person could easily be put off a field by not wishing to take on a negative identity they associate with it.

This now makes a few books I’ve read that argue that school teaching techniques should import game design principles. But though I accept that the video games mentioned do teach something, and may teach it very well, it is hard to find an example of an existing game that teaches something of value outside the game itself (besides side benefits such as hand-eye coordination, willingness to experiment, or positive self-esteem). Could it be that not enough games are built around real systems? Or is it fundamentally harder to get people to play a game about physics or language than it is to get them to jump on platforms, match colors, and aim at zombies? Could it be that games mostly motivate people to learn intuitively, and not formally? Is school just not playful enough, or is it simply harder to make certain subjects both playful and meaningful at the same time?

Ultimately, both educators and game designers are asking a person to invest their time and effort. If you don’t believe that the benefits are worth the investment, then why put in the time? Many (perhaps most) games offer some kind of immediate benefit of pleasure, both in spectacle and in solving problems. Could school subjects offer the same?

Crossposted from the Gamelier

How many articles on computer science can there possibly be?

Meet the PAF Peacock


I just got back from a 3-day trip to the beautiful and mysterious PAF, in the countryside near Reims, about 2 hours east of Paris. The place is made for artists, dancers, and musicians to work, think, and play. And at only 16€ a night, it’s an incredible deal. An ancient monastery, it’s still filled with objects of previous grandeur, like out-of-tune pianos and threadbare tapestries, but who can say no to a kick-ass ping-pong table?

But as their website says, it’s a place for “production”, not for “vacation”. And that’s what the 9 of us were there to do: import Wikipedia into KnowNodes, visualize it on a graph, and let users quickly find these articles when making connections.

Know your Nodes

So what is this KnowNodes project, you may be asking. Dor Garbash’s dreamchild, it is a connectivist orgasm, a sort of map of human thought and knowledge, but one focused on the connections between resources (scientific articles, blog posts, videos, etc.) rather than on the resources themselves. Students could use it to find new learning resources, researchers could use it to explore the boundaries of knowledge in their field, and the rest of us just might love jumping from one crazy idea to the next.

Much like a new social network, one recurring problem with getting this kind of project off the ground is that it needs good-quality content to jumpstart it. And what could be a better source of quality information than the world’s largest crowdsourcing project ever, Wikipedia?

Now down to the gory details. How big is Wikipedia, anyway? Well, according to the “Statistics” page, the English-language site alone counts over 30 million pages (around 4.5 million of them actual articles) and growing. Wikipedia (and the Wikimedia platform behind it) is very open with its data: you can easily download nightly dumps of the database in different formats. But here’s the rub: the articles alone (not counting user pages, talk pages, attachments, previous versions, and who knows what else) still weigh in at 42 GB of XML. That was a bit too much for our poor little free-plan Heroku instance to handle.

So we came up with a better idea: why not just focus on a particular domain, such as computer science? That way we could demonstrate the value of the approach without overloading our own tiny DB. Now, we realized that we couldn’t just start at the computer-science article and branch outwards, because with the 6-degrees nature of the world, we would soon end up importing the Kevin Bacon article. But Wikipedia has thoughtfully created a system of categories and sub-categories and sub-sub-categories, and anyway, how many articles under the Computer Science category could there possibly be?


Hmmm, let’s find out. We wrote a node.js script that uses the open Wikimedia API. The only way to find all the articles in the Computer Science category hierarchy is to recursively ask the API for the categories within it, then do the same with each of their children, and so on, until we reach the bottom.

The nodemw module came in really handy, as it wraps many of the most common API operations so you don’t have to make the HTTP requests yourself. It also queues all the requests you make and only executes one at a time. That prevents Wikipedia from banning your IP (which is good), but it also slows you way down (not so good).

Enough talk, here’s what we came up with:
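A minimal sketch of that recursive walk looks like the following. The `client` object stands in for nodemw, and the callback-style `getPagesInCategory` and `getSubcategories` methods are assumptions for illustration (check the module for its actual method names). Note that this naive version blithely assumes the categories form a tree:

```javascript
// Recursively collect every article under a category.
// `client` is a stand-in for nodemw: getPagesInCategory(cat, cb) and
// getSubcategories(cat, cb) are assumed callback-style methods.
function collectArticles(client, category, done) {
  let articles = [];
  client.getPagesInCategory(category, (err, pages) => {
    if (err) return done(err);
    articles = articles.concat(pages);
    client.getSubcategories(category, (err, subcats) => {
      if (err) return done(err);
      let pending = subcats.length;
      if (pending === 0) return done(null, articles);
      subcats.forEach((sub) => {
        // Recurse into each sub-category and merge its articles.
        collectArticles(client, sub, (err, subArticles) => {
          if (err) return done(err);
          articles = articles.concat(subArticles);
          if (--pending === 0) done(null, articles);
        });
      });
    });
  });
}
```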

And so we launched the script, saw that it was listing the articles, and walked away happily, bragging about how quickly we had coded our part as we watched the peacocks scaring each other in the courtyard.

When we came back a few hours later and the article count had surpassed 250,000, Weipeng suspected there might be a problem. He started printing out the categories as we imported them, and sure enough, we saw duplicates. That was the first sign that something was wrong. The second was when we saw that we had somehow imported an article on “Gender Identity”. That doesn’t sound a lot like computer science, does it?

On further inspection, we found that our conception of how the category system worked was very wrong. It turns out that categories can have several parents, that pages can be in multiple categories, and that categories can even loop around on themselves. This is very different from the simple tree we had been imagining.

Time for a new approach: simply limit the depth of our exploration. Stopping at 5 levels gave us about 110k articles, and 6 levels gave us 192k. We couldn’t find any automatic criterion to say whether all these articles really should be part of the system, but this was about the number we were hoping for, so we stopped there.
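In memory, the fix amounts to a visited set (to break the loops) plus a depth cap (to bound the walk). A sketch, with `graph` as a plain object standing in for the API responses rather than any real nodemw structure:

```javascript
// Depth-limited category walk with a visited set, guarding against
// categories that loop back on themselves or share parents.
function collectToDepth(graph, root, maxDepth) {
  const visited = new Set();
  const result = new Set();
  function walk(category, depth) {
    if (depth > maxDepth || visited.has(category)) return;
    visited.add(category);
    (graph.pages[category] || []).forEach((p) => result.add(p));
    (graph.subcats[category] || []).forEach((sub) => walk(sub, depth + 1));
  }
  walk(root, 0);
  return [...result];
}
```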

Wikipedia -> KnowNodes

Now that we had a list of articles, it was time to actually put them into the database. Time-wise, it probably would have made sense to go through the XML dump in order to avoid making live API requests. But that wouldn’t help us when a user looks at a new article outside of those we had imported. And so we created a dynamic system.

The code in this case might not make a lot of sense to anyone who hasn’t worked on the project, but the idea is simple enough: convert the title to the URL of the article, download the 1st paragraph (as a description), and insert it into our database. The 2nd part turned out to be much harder than we had thought. Wikipedia uses its own “Wikitext” format, which you wouldn’t want to show as-is. There actually are quite a few libraries to convert from Wikitext to plain text (or to HTML), but very few of the Javascript ones worked reliably in our case. The best we found was txtwiki.js, which really is quite good, except that even it fails on infoboxes (which unfortunately are often placed first on the page, messing up our “take the 1st paragraph” approach). In the end, Weipeng found that we could simply ask for the “parsed” (HTML) version of the page and take the text inside the first “<p>” tag we found.
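That last trick, grabbing the text inside the first “<p>” of the parsed HTML, can be done with a small regex helper (a sketch; a real HTML parser would be more robust):

```javascript
// Take the text inside the first <p>...</p> of the parsed (HTML)
// version of a page, stripping any tags nested within it.
function firstParagraph(html) {
  const match = html.match(/<p[^>]*>([\s\S]*?)<\/p>/i);
  return match ? match[1].replace(/<[^>]+>/g, '').trim() : '';
}
```

Because the match skips everything before the first `<p>`, infobox tables that open the page are ignored.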

Importing a bunch of isolated Wikipedia articles does not create a map of knowledge; making connections between them does. The Wikipedia API provides at least 3 different kinds of links: internal (to other pages on the site), external (to the general internet, as well as to partner sites like WikiBooks), and backlinks (other Wikipedia articles that point to the article). We query all 3, find which ones already exist in the database, and set up a link between them.
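The matching step is just set membership. A sketch with illustrative names (not the actual KnowNodes code): given the three link lists returned by the API and the set of titles already imported, keep only the links whose other end exists, remembering that backlinks point toward the article rather than away from it.

```javascript
// Given the three link lists for one article and the set of titles
// already in the database, produce the edges to create.
function buildEdges(title, links, inDatabase) {
  const edges = [];
  for (const [kind, targets] of Object.entries(links)) {
    for (const target of targets) {
      if (inDatabase.has(target)) {
        // Backlinks point *to* this article; the others point away from it.
        edges.push(kind === 'backlinks'
          ? { from: target, to: title, kind }
          : { from: title, to: target, kind });
      }
    }
  }
  return edges;
}
```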

Code-wise, there’s not much to show that isn’t tied intimately into KnowNodes. Nodemw is missing a method to get internal links, though, so here is what we wrote:
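Roughly, it looked like this (a reconstruction of the idea, not the original gist). The parsing half is pure; the request half assumes a nodemw-style client exposing a low-level api.call(params, callback) helper, so treat that exact signature as an assumption and check the module before relying on it:

```javascript
// Flatten the JSON returned by action=query&prop=links into a plain
// list of linked-to titles.
function parseLinks(response) {
  const pages = (response.query && response.query.pages) || {};
  const titles = [];
  for (const id of Object.keys(pages)) {
    (pages[id].links || []).forEach((l) => titles.push(l.title));
  }
  return titles;
}

// Ask the MediaWiki API for the internal links of one article.
// Assumes a nodemw-style client with a low-level api.call helper.
function getInternalLinks(client, title, callback) {
  const params = { action: 'query', prop: 'links', titles: title, pllimit: 'max' };
  client.api.call(params, (err, data) => {
    if (err) return callback(err);
    callback(null, parseLinks(data));
  });
}
```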

One foot in front of the other

The last step in this journey was going through the article lists we had generated and making calls to our own API to load each Wikipedia article. This seems straightforward enough, except that it is bizarrely difficult to read a text file line by line in node.js. Search StackOverflow and you’ll find a bunch of different approaches, including using the lazy module, which works pretty well. But since I knew that our system could only make one Wikipedia request at a time, and that each Wikipedia article involves at least 4 requests (for the article and the 3 types of links), there was no point overloading the server. I just wanted to read one line at a time.

Line Reader to the rescue. It has a very minimalist API, but one that allows you to asynchronously declare when a new line should be read, and is therefore perfect for my needs.


Bonus: Drunken graph walking

While Weipeng and I were puzzling over inexplicable errors, Bruno was pondering a bigger question: now that we have all these links, how do we know which are more important than others? Among the many planned features for KnowNodes is a voting system for the links, but couldn’t we get a good idea from the link structure that already exists on Wikipedia?

Bruno came up with a “friends of friends” approach: given an article A and an article B that it links to, count the number of articles that A links to that also link to B. What’s nice about this approach is that it imitates a random walk along the graph. Imagine you are on the Wikipedia article for A: what is the chance that by following links you will end up at B in 2 clicks?

In practice, these numbers tend to be very asymmetrical. A subject like “Python” may have a lot of links leading toward “Computer Science”, but only a small fraction of the links from “Computer Science” lead to “Python”.
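Over an in-memory link table, the count is only a few lines (a sketch of the idea, separate from the Cypher we actually ran against the database):

```javascript
// "Friends of friends": for articles a and b, count how many of a's
// link targets themselves link on to b, i.e. the number of 2-click
// paths from a to b. `links` maps a title to the titles it links to.
function friendsOfFriends(links, a, b) {
  let count = 0;
  for (const mid of links[a] || []) {
    if ((links[mid] || []).includes(b)) count += 1;
  }
  return count;
}
```

The asymmetry falls out naturally: swapping a and b counts a different set of paths.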

We considered coding this into the Wikipedia importer, but there’s no reason the approach shouldn’t work for any type of node and link in the system. And why not learn about querying a graph database in the process?

This was the first time Bruno and I had written Cypher queries, so I doubt this is the best way to do it, but this is what we came up with:

Although Cypher’s documentation is a bit lacking (mostly in examples), it actually makes a lot of sense once you start working with it. The graphic representation of the links is a big plus, and the rest of it is reminiscent of SQL for those of us who have used “normal” DBs before.

And there you go. 3 days of good work and good times. Next step? Let’s get a good search box on the KnowNodes front page!