
Extended Intelligences

“My suitcase’s main purpose is to be part of my extended mind, reducing complexity by sorting my stuff into neat little packages my low-powered brain can easily grasp” - me, paraphrased, when I moved to Barcelona - exhausted by my tendency to hoard - overextending David Chalmers’ ideas until they snap

“it’s probably better doing it this weekend […] because next week you’ll have your mind filled with […] other things” - Pau, a man who was right about when I should have written this

General reflections

AI used to be a running joke in my former flat (sometime during Covid, I think). One of my flatmates kept talking about the danger (and potential) of AI, an AI that might break out of its man-made cage and paperclip everyone. While that was never my main perspective, I have tended in the past to see AI as an overhyped and abstract topic: pop-science articles about it better ignored, its practice best left to math PhDs rather than to an impatient mind like mine.

This course changed my perspective on machine learning. Some of what can be done in this area now seems akin to building Arduino-based projects, 3D-printing something, or screen-printing a t-shirt yourself: an activity that does require some skills, but skills that almost anyone can learn.

I was impressed by the amount of resources available for free, and by the amount and quality of openly accessible and usable datasets and models. I feel like this accessibility was not as pronounced when I last paid attention to the space.

Understanding that many of these models work by optimising search in a multi-dimensional space, where proximity represents similarity in terms of certain properties, really helped me grasp what is happening in machine learning.
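
To make this concrete for myself, here is a minimal sketch in Python of the proximity-as-similarity idea. The vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions, but the geometry is the same: similar things end up close together.

```python
# A minimal sketch of "proximity means similarity", assuming we already have
# embedding vectors for a few words. The numbers below are invented.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: close to 1.0 means they point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "chair":  np.array([0.9, 0.1, 0.3]),
    "stool":  np.array([0.8, 0.2, 0.4]),
    "banana": np.array([0.1, 0.9, 0.2]),
}

print(cosine_similarity(embeddings["chair"], embeddings["stool"]))   # high: nearby in the space
print(cosine_similarity(embeddings["chair"], embeddings["banana"]))  # low: far apart
```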

Taller Estampa’s artistic approach also reminded me of how much fun, and how influential, it can be to engage with a technology playfully.

All of this made me even more inclined to engage with any topic with the constructive disrespect of a DIY ethic.

The connection between such an ethic and a certain counterculture raises the question: what are the most punk reasons and ways to ‘teach’ machines?

Project inspiration

diyosaurus - our imagined model

Throughout this course, we engaged with the material by imagining a machine learning model we might build, and attempting to identify some of its elements, inputs, and outputs.

The group I was a part of was interested in a model that could take as input an image of a thing and output instructions for how to make it oneself (alone or with others, at home or in one’s neighbourhood).

We imagined many additional inputs that would make the output useful, such as the skill level of the user, location (which tells us which materials are readily available or abundant in a particular place), some kind of assessment of the environmental impact, etc.

But even the basic functionality imagined (picture to generic instructions for making) is something I would find exciting, because I think one of the reasons people don’t make more things themselves is the cognitive overhead involved in finding the right way to do it.

In a more subversive (but also more speculative) vein, an advanced model of this kind could perhaps automate the reverse-engineering of products to replace them with diy/o equivalents.

One of the datasets we found on Kaggle as we explored the possibilities of building this model was the ‘Human Know-How Dataset’. It is related to a project which seeks to create a machine- and human-readable language for production process steps. This interests me in the AI context, but also in itself. I think it would be very interesting to create ‘recipes’ for the diy/o production of certain things using such a language.
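
As a purely hypothetical illustration (this is not the dataset’s actual schema, just my guess at the idea), a single production step in such a language might look something like this:

```python
# A hypothetical sketch of a machine- and human-readable production step.
# Field names are invented for illustration, not taken from the Human Know-How Dataset.
step = {
    "action": "cut",
    "inputs": ["plywood sheet (4 mm)"],
    "tools": ["jigsaw"],
    "outputs": ["two side panels"],
    "requires_skill": "basic power-tool use",
    "depends_on": [],  # ids of steps that must be completed first
}
```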

As a proposed way of building the model in question, we landed on a two-part structure for the basic functionality: 1) translating the picture into a description that is useful for production purposes, e.g. involving components, materials, etc.; 2) converting this description into a set of instructions for making the thing.
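
To make that two-part structure a bit more tangible, here is a rough sketch in Python. The function names and the toy return values are placeholders I made up, not an actual implementation:

```python
# A rough sketch of the imagined two-stage pipeline; everything here is a
# hypothetical placeholder, not a working model.

def describe_object(image_path: str) -> dict:
    """Stage 1: turn a photo into a production-oriented description
    (components, materials, rough dimensions)."""
    # In a real version, an image model trained on product descriptions or
    # teardowns would go here; this stub returns a fixed toy description.
    return {"object": "stool", "components": ["seat", "legs"], "materials": ["plywood"]}

def generate_instructions(description: dict, skill_level: str, location: str) -> list[str]:
    """Stage 2: turn that description into step-by-step making instructions,
    conditioned on the user's skills and locally available materials."""
    # In a real version, a language model prompted with the structured
    # description would go here; this stub returns placeholder steps.
    steps = [f"Source {material} locally in {location}" for material in description["materials"]]
    steps.append(f"Assemble the {description['object']} ({skill_level}-friendly steps would follow)")
    return steps

# Intended usage:
description = describe_object("stool.jpg")
print("\n".join(generate_instructions(description, skill_level="beginner", location="Barcelona")))
```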

Some of the tips our instructors gave us as feedback included the following points:

  • the model likely requires a combination of semantic web (structured) and neural network (unstructured?) approaches
  • there is a lot of work on recipes (for cooking) that could be leveraged for this project

Regarding the second point, the recipe metaphor has permeated my thinking for the past few weeks. I would really like simple-to-follow, recipe-style instructions for making things. Also, OpenAI’s Davinci model (‘cheaper’ than some other ones) is able to write a coherent “recipe” for making, say, a thermos bottle. But the recipe in question is not actually a recipe in the sense of being actionable as a plan for making something. This reflects current text-based AI’s core skill of being a good career person…
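
For reference, the kind of prompt I mean looks roughly like this against the pre-1.0 openai Python package; the exact model name and parameters are assumptions on my part, and an API key is expected in the environment:

```python
# A minimal sketch of prompting a Davinci completion model for a making "recipe",
# using the pre-1.0 openai Python package. Model name and parameters are assumptions;
# an API key is read from the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write step-by-step instructions for making a thermos bottle at home.",
    max_tokens=400,
    temperature=0.7,
)

print(response.choices[0].text)
# The output reads like a recipe, but it is rarely actionable as a real plan
# for making the thing, which is the gap described above.
```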

I’m looking forward to seeing where all of this will lead me.

This text was not written by GPT-3, I swear.