Friday, 29 August 2014

LEGO Pirate Hit With Super Rogue Wave

Saturday, 26 July 2014

Randomly Infinite

As some of you might know, I am writing a popular book about Bayesian probability and science. The first draft, which is a work in progress, can be found for download here, and I would appreciate any feedback:

I am working on a revised version right now, and I have decided to expand the sections on randomness into a whole chapter. The problem is knowing when to stop. Randomness is such an interesting and intriguing topic that one could write an entire book on it alone. There are many things I would like to say about randomness but have to leave out; otherwise, they would change the main theme of the book.

I may consider writing a book solely on randomness in the future, but for now I take some consolation in the fact that I can at least share with you the things that will be left out. One of them concerns the mind-boggling results you get when you mix two dangerous concepts: randomness and infinity.

SPOILER ALERT. If you haven't read Contact by Carl Sagan yet, be aware that I will be talking about something that happens literally at the end of the book. Be warned. At the very end, the main character is running a program to find a message hidden in the digits of the number Pi by the supposed 'designers of the universe'. The program suddenly spills out a sequence of numbers that somehow forms the picture of a circle. Now, you might be truly amazed to learn that Carl is right: there is indeed such a sequence hidden in the digits of Pi! That exact sequence, by the way! Have I just changed your life? Before you start tweeting about this amazing news, let me tell you a couple of things about Pi.

Well, everybody knows that Pi is the number you get by dividing the circumference of a circle by its diameter. In flat Euclidean space, which is the one obeying the geometric properties you learned in school, this works for every circle. But Pi is a very interesting number in many other respects. One of them is the fact that it is an irrational number. This means that there is no way to write Pi as a fraction, or ratio, of two integers. A consequence of this is that the decimal digits of Pi can never be periodic. What does that mean?

A periodic sequence is one that repeats itself after a certain number of digits. Examples are:


I am assuming these sequences repeat forever (I call the last one the 'Mambo Sequence', by the way). The first sequence has period 1, the second has period 2 and the Mambo Sequence has period 6. The period is, then, simply the number of digits that repeat. A rational number, one that can be written as a fraction of two integers, always ends in a periodic sequence. It can take a while to reach that sequence, but it is always there. For instance:
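This is concrete enough to compute. Here is a small sketch of my own (the helper name is mine, not from the book): long division finds the period of any fraction, because the repeating block starts the moment a remainder repeats.

```python
def decimal_period(num, den):
    """Period length of the decimal expansion of num/den.

    Long division: track remainders; when one repeats, the digits
    between its two appearances form the repeating block. A zero
    remainder means the expansion terminates (period 0).
    """
    seen = {}
    r, pos = num % den, 0
    while r and r not in seen:
        seen[r] = pos
        r = (r * 10) % den
        pos += 1
    return 0 if r == 0 else pos - seen[r]

# 1/6 = 0.1666... has period 1; 1/7 = 0.142857142857... has period 6;
# 1/8 = 0.125 terminates, so its period is 0.
print(decimal_period(1, 6), decimal_period(1, 7), decimal_period(1, 8))
```

Every fraction you feed it comes back with a finite period; an irrational number like Pi would keep this loop running forever.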


where the final digit '6' repeats forever, is a rational number. In irrational numbers, like Pi, this never happens. The odd consequence is that the digits of Pi, which start like:

3.1415926535897932384626433832795028841971...

have all the properties of an infinite sequence of randomly distributed integers from 0 to 9, each one with equal probability! If you are skeptical, take a look at the two graphs below.

These two graphs appear in my book. They represent two sequences of digits from 0 to 9, each 100 digits long. Can you see the difference between them? There is no regularity in either graph, but one of the sequences is the digits of Pi and the other is a sequence of digits randomly generated by a computer program. Try to identify which one is Pi. There is a way, but it is definitely not by looking at their overall appearance.
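If you want to run this kind of test yourself, here is a self-contained sketch (my own illustration, not the program behind the book's graphs) that computes digits of Pi using Machin's formula and tallies how often each digit appears:

```python
from collections import Counter
from decimal import Decimal, getcontext

def pi_digits(n):
    """First n decimal digits of Pi after the '3.', via Machin's
    formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = n + 10  # guard digits against rounding drift

    def atan_inv(x):
        # arctan(1/x) = sum over k of (-1)^k / ((2k+1) * x^(2k+1))
        total, k = Decimal(0), 0
        while True:
            term = Decimal(1) / ((2 * k + 1) * x ** (2 * k + 1))
            if term < Decimal(10) ** -(n + 8):
                break
            total += -term if k % 2 else term
            k += 1
        return total

    pi = 16 * atan_inv(5) - 4 * atan_inv(239)
    return str(pi)[2:2 + n]  # drop the leading "3."

# Tally the first 100 digits: no digit dominates conspicuously.
print(Counter(pi_digits(100)))
```

With only 100 digits the counts fluctuate quite a bit, exactly as they would for a genuinely random sample, which is why eyeballing the graphs gets you nowhere.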

Another detail is that the number of decimal digits of Pi is infinite. That is because any number whose sequence of decimal digits is finite IS a rational number. All you need to do to find its representation as a fraction is multiply it by an appropriate power of ten until it becomes an integer. The number is then that integer divided by that power of ten.
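The recipe in the last paragraph is mechanical enough to code. A minimal sketch (the function name is mine):

```python
from fractions import Fraction

def finite_decimal_to_fraction(s):
    """A number with finitely many decimal digits is rational:
    multiply by 10^(number of decimals) to get an integer, then
    divide that integer by the same power of ten."""
    whole, _, decimals = s.partition('.')
    power = 10 ** len(decimals)
    return Fraction(int(whole + decimals), power)

print(finite_decimal_to_fraction("3.1415"))  # 6283/2000
```

`Fraction` reduces the result automatically: 31415/10000 becomes 6283/2000. Since no such finite recipe can exist for Pi, its decimal expansion cannot stop.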

Many of you must know Jorge Luis Borges' story The Library of Babel. In it, Borges imagines a library containing books in which every combination of the letters of the alphabet is present, in random order. This means that, if you look only at the books with, say, 400 pages, the library contains every story and every scientific book that has ever been written, or that will one day be written, as long as it fits in 400 pages! Even things that haven't been discovered yet! Even stories that nobody has written yet, but that one day someone will! In fact, because the library is infinite, it contains all books that have ever been written or ever will be.

Although Borges' library is fictional, it illustrates a truly amazing property of the infinite. When you put together infinity and randomness, you get something even more amazing. It can be proved that in an infinite random sequence, ANY finite sequence of characters appears an infinite number of times! Now, the punchline:

Every finite sequence of numbers appears an infinite number of times
in the sequence of decimal digits of Pi.

And so what? Think about this. In the same way you can encode computer files in binary form, you can encode any information in decimal form. If you doubt it, just write down the binary representation of any file: that is an integer. Write that integer in base ten and voila! This means that every text that has ever been written, or ever will be, can be found somewhere in the sequence of decimal digits of Pi. An infinite number of times! So whatever Sagan's character found in the digits of Pi is not a message from another race, but simply the result of good old randomness!
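To make the encoding step concrete, here is a minimal sketch (the function names are my own): any text becomes one big integer, and that integer written in base ten is the digit string you would hunt for in Pi.

```python
def text_to_decimal(text):
    """Encode text as a base-ten digit string: bytes -> integer -> decimal."""
    return str(int.from_bytes(text.encode("utf-8"), "big"))

def decimal_to_text(digits):
    """Invert the encoding: decimal string -> integer -> bytes -> text."""
    n = int(digits)
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

code = text_to_decimal("hi")
print(code)                   # 26729
print(decimal_to_text(code))  # hi
```

Since every finite digit string appears (infinitely often) in an infinite random sequence, so does the decimal encoding of any text you care to name.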

If you are worried that there is so much information hidden in Pi, or maybe trying to devise a plan to extract future information from it like the Bible Code, be aware that this is useless. Because the digits are random, there is no way to know beforehand where the information is, or even which version of it is correct, because the same information also appears with every possible mistake!

Reading the text above again, I cannot stop being amazed myself. I am already having second thoughts about whether or not to include this in the book...

Update: Some friends are telling me that it has not been proved that the digits of Pi are random. Maybe I should have been more cautious when I wrote this. It is absolutely true that there is no proof that the digits of Pi are random; in fact, there is evidence that they are actually pseudo-random, in the sense that one might predict them using a formula found by a few mathematicians (the Bailey-Borwein-Plouffe formula, which computes isolated digits of Pi in base 16).

Still, the possibly pseudo-random sequence of digits of Pi continues to satisfy most tests of randomness which, although not a proof that the digits are random, makes it more plausible that they satisfy the most interesting properties, especially that of containing every finite sequence.

Friday, 18 July 2014

The Circle of Life

I have just read this funny circular definition of life in June's Scientific American:

"(...) would be strong evidence of life, widely defined: a biological system that encodes information and uses this information to build complex molecules."

I'll leave it to the reader to find out why it is circular.

Wednesday, 16 July 2014

The (New) Meaning of Science

Every word, in every language, has a life cycle. Words are used by humans in their daily affairs, and humans are complicated creatures whose decisions are shaped by a complex interplay between reason and emotion. Because of this, words evolve in such a way that, with time, their meanings go through “mutations” which might eventually lead to such a radical change that they cannot be used anymore within their original scope. The word “science” is no different.

Most modern discussions concerning what science actually is end up falling into the semantic category. The reason is that, at some point, the meaning of ‘true knowledge’ was attached to the word science. The study of methods to discern which kind of knowledge is ‘true’ and which is ‘false’ ended up being associated with the word and, as these methods became successful, the word science acquired a respected status. Humans are attracted by reputation, as it improves their chances of satisfying emotional goals. Hence the importance of being associated with the word and the status it provides.

The original meaning of the word “science” seems to have been much less ambitious, denoting simply any kind of knowledge. Greek philosophers seem to have been responsible for seeking a way to separate knowledge that actually describes how the world works from knowledge that does not. This was when the word science started to acquire its respectability.

The S-Method

Let us forget the word “science” for now and consider the following problem. It is undeniable that there are repeating patterns in nature. That is a trivial observation whose simplest example is the fact that the sun rises with some predictable regularity every single day. In fact, that creates the basis on which we define what a ‘day’ is.

The fact that patterns exist allows us to write down sets of rules for them. The problem I want to propose is that of checking whether a pattern we think we have found is really there or not. This can be thought of in terms of a competition.

The competition consists of the judges writing down a set of rules that generates a sequence of numbers. The judges hire a programmer to create an app that uses the rules to generate the required sequence. Using that program, the judges generate a dataset, which is then given to different groups of people whose task is to find the original pattern, the judges’ rules, that generated the data. Once each group has prepared its entry to the contest, one has to decide which is the winner. In this case, of course, all that is needed is to check which group gave the correct rules.

Stated this way, it is easy to decide who the winner is. Suppose now that, somehow, the judges have lost the original rules and cannot remember them. All they still have is the app, but the programmer has already gone on holiday and cannot be contacted. They still need to decide which group is the winner. Can they do that?

Indeed, there is a way to select the winner, and we will call it the S-Method (‘S’ for ‘selection’). The S-Method is an elimination method. The judges start to generate additional numbers beyond the original dataset and ask the groups to do the same with their rules. Each time a group generates a number different from the one generated by the app, that group is eliminated.

Unless two groups have equivalent rules, meaning that they always generate exactly the same numbers, the S-Method guarantees that at some point a winner will be found. This can take time, but it will eventually happen.
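The elimination procedure is simple enough to sketch in code. Everything below is hypothetical (the judges' rule and the groups' entries are made up by me), but it captures the S-Method's logic:

```python
def s_method(app, groups, rounds=100):
    """Run the elimination: each round, every surviving group must
    predict the app's next number; a wrong prediction eliminates it."""
    survivors = dict(groups)
    for t in range(rounds):
        target = app(t)
        survivors = {name: rule for name, rule in survivors.items()
                     if rule(t) == target}
    return survivors

# The judges' lost rule, accessible only through the app:
app = lambda t: (3 * t + 1) % 7

groups = {
    "A": lambda t: (3 * t + 1) % 7,   # an equivalent rule: survives
    "B": lambda t: (3 * t + 1) % 14,  # agrees for t = 0, 1 only
    "C": lambda t: t,                 # wrong from the start
}
print(sorted(s_method(app, groups)))  # ['A']
```

Note that group A "wins" only in the sense of surviving: no number of rounds ever proves its rule is the original one, which is exactly the limitation discussed next.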

But the S-Method still has one limitation. It serves the objective of finding a winner, but it cannot guarantee that the winner's rule is the correct one. No matter how long you test, although you might catch a failure and debunk the winner’s rule by its predicted next number, you will never be able to tell for sure that the generated numbers will always work.

Notice the key idea of the S-Method: it requires each group to make predictions about the next number. That is because one wants to check whether the rules, or in other words the inferred pattern, are indeed the correct ones.

There is no way to check whether the rules work without testing them against the data. If one of the groups simply created a fancy story that generated only the original dataset but could not be used to generate additional numbers, they would not have identified the original rules.

The possibility of generating a prediction that can be checked against data generated by the original rule is called falsifiability. Entries to the contest which are not falsifiable cannot be judged. In the case of the contest, they are automatically wrong, as the original rules do generate more numbers.

Consider now that we are dealing with nature. We do not really know whether there are patterns in every phenomenon. Experience indicates that there are, simply because we have been able to find so many of them up to this day. If our guesses about a phenomenon are falsifiable, then we can apply the S-Method to select the best guess, or even to eliminate all of them.

However, it is possible that nature contains phenomena in which we cannot find a pattern, even in principle. Those phenomena cannot be attacked using the S-Method; they are out of its reach. Fortunately, such situations are rare and do not affect our lives significantly, only emotionally.

The Certified Scientist 

You can appreciate that both the effectiveness and the limits of applicability of the S-Method are well established above. It turns out that, at some point in history, the word ‘science’ started to be associated only with knowledge that could be checked using the S-Method.

Because the S-Method is clear, objective and powerful, it started to yield results. Those who dedicated themselves to checking which of the patterns guessed up to that point were valid succeeded in selecting the rules that actually worked.

It did not take long for people to see that explanations of natural phenomena in terms of gods and spiritual entities were not falsifiable. This would not be too critical in principle; the greater problem is that people started to actually find falsifiable descriptions for those phenomena.

Those people who started to dedicate themselves to tailor falsifiable models for natural phenomena then became the new ‘scientists’. They gathered together and started to teach others. 

The success of this new meaning of ‘science’ made the title of ‘scientist’ a desirable one. Desirable because of the credibility associated with it. And then the scientists started to give certifications for those who studied with them. They created the ‘certified scientists’ and this was the beginning of a new change in meaning.

The problem with certifications is that, at some point, they stop being about the original qualities of the product and become a matter of politics. Those who receive a certification that cannot be revoked will tend to ignore the very rules that allowed them to earn it in the first place whenever those rules go against their personal beliefs. Because the individuals themselves have the power to certify others, the certifications degenerate with time.

The unintended effect is that the original meaning of the very words that defined the certification starts to drift. Because there are now ‘certified scientists’ who will not accept losing their certification, they lobby to include in the meaning of ‘science’ whatever they personally do, or think they should do.

New Science

Finally, the term ‘science’ is no longer associated with the S-Method, but is instead used to describe a profession whose definition bends according to the wills and needs of those who have the power to grant certifications.

Here lies the kernel of all modern discussions about what ‘science’ is. Discussing the validity of the S-Method is not the issue; the issue has become whether or not to grant the ‘certified scientist’ title even to those who ignore the S-Method.

Model Engineering

The S-Method, as powerful as it is, is just a selection procedure. It requires models to select among. This made ‘model engineering’ an important part of what became known as science.

As more data about natural phenomena accumulated and models of the simpler ones were selected, model engineering became more complex. Whenever complexity increases in an area, specialisation naturally follows. This resulted in many certified scientists becoming specialised in model engineering.

Today’s model engineering is a very sophisticated process, and mathematics plays a key role in it. Mathematics allows us to describe concisely the patterns in natural phenomena, including the ones humans use to reason. Once these patterns are codified and selected as valid, they can be trusted until a reason appears not to do so.

Model engineering is a very difficult area and requires a lot of ingenuity and creativity. In modern times, it also requires good knowledge of mathematics and a certain ability to work with it. Many of the most famous certified scientists are theoretical physicists whose mathematical ability is recognised as outstanding.

The use of mathematics provided a means to build models that go beyond the practical reach of the S-Method in terms of economic and technological feasibility. There are no known limits to the kinds of models that can be engineered; the only constraint is that they should agree with collected data and not contradict models which have already been selected by the S-Method within their limits of applicability.

Many certified scientists concentrate only on model engineering and leave the task of selecting models to other specialists. There is nothing wrong with that as a profession, as long as they remember that the fact that a model has been engineered using valid methods still does not mean that the model is the correct rule for describing some natural phenomenon.

Science without the S-Method 

What happens if we keep model engineering and discard the S-Method? 

Many people today are lured into believing that, as long as a model involves mathematics, it is a good model. But model engineering can be completely detached from the S-Method and, as a consequence, using mathematics does not per se lend any extra credibility to a model.

Religion and mysticism contain many examples of models which can even be based on mathematics but nevertheless would either not be vindicated by the S-Method or would not even fall under the scope of its application. Model engineering without the S-Method falls into the same category.

Questioning is not Enough

Rebellion against rigid impositions is a good practice. It is by questioning traditional rules that flaws in reasoning can be found. However, rebellion for the sake of rebellion is as useless as conformism. One must question things for a reason; otherwise, the questioning becomes senseless.

Critics attack the S-Method, or the falsifiability principle, as being too rigid, but ignore the original objective that led to it.

If one wants to change the meaning of science once again, from a method for finding correct models of nature’s patterns to a list of professional obligations, there is very little that can be done to prevent it. What cannot be tolerated is that this new meaning of science still demands to be recognised as something that achieves the former.

Tuesday, 15 July 2014

Game of Life

I have just found this application that allows you to simulate Conway's Game of Life and other cellular automata:

It's an open-source program that allows you to change the update rules for two-dimensional automata. I have been playing with it a bit and it seems very simple and potentially very useful (not to mention very entertaining). The picture at the beginning is a screenshot of one of the rules.
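For readers who want the update rule itself rather than an app, here is a minimal sketch of one Game of Life step on a sparse grid (the set-based representation is my own choice, not how that program works internally):

```python
from collections import Counter

def life_step(live):
    """One Game of Life update. `live` is a set of (x, y) cells.
    A cell is alive next step iff it has 3 live neighbours, or
    2 live neighbours and is currently alive."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'blinker' oscillates between a horizontal and a vertical bar,
# so two steps bring it back to where it started:
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(blinker))  # the vertical bar: (1,-1), (1,0), (1,1)
```

Changing the birth/survival conditions in the last line of `life_step` gives you other two-dimensional automata, which is essentially what the app lets you do interactively.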

Friday, 11 July 2014

Post-Empiricism and Data Tables

I was reading Peter Woit's blog and stumbled upon this post-modern word: post-empiricism. Apparently, a guy named Richard Dawid, said to be a physicist-turned-philosopher, whatever that means, wrote a book about this and String Theory. I haven't read the book and I doubt I will, because I have already read a thousand similar arguments and not even one of them had anything new to add to the discussion.

But to give you an idea of what Dawid means by "post-empiricism", I will reproduce part of an interview he gave, which Woit quoted on his blog:

I think that those critics make two mistakes. First, they implicitly presume that there is an unchanging conception of theory confirmation that can serve as an eternal criterion for sound scientific reasoning. If this were the case, showing that a certain group violates that criterion would per se refute that group’s line of reasoning. But we have no god-given principles of theory confirmation. The principles we have are themselves a product of the scientific process. They vary from context to context and they change with time based on scientific progress. This means that, in order to criticize a strategy of theory assessment, it’s not enough to point out that the strategy doesn’t agree with a particular more traditional notion.

Let me start by saying that, as Sokal has made explicit, finding an intellectually good-looking word for something does not make it true. In particular, although attaching the prefix 'post-' to a word gives it an air of modernity and rebellion, that does not lend any extra credibility to the concept either.

Okay, as I am not reading the book, I have to extract what I understand of Dawid's post-empiricism from the post. It seems to me that he is simply rephrasing, in the most Sokal-like fashion, the argument that we should relax the condition that theories be testable. He talks about 'god-given principles', principles that 'change with time', and 'traditional notions'. All of this, of course, is rhetorical technique that means nothing concrete.

I really, really understand the desperation of string theorists to defend their line of research, given that people will not give credit for purely theoretical exploration of ideas, but that's no reason to turn to religion and mysticism, or to start believing in ghosts, which is exactly what happens when one argues that one does not need to test whether something works as long as it is interesting. Of course, not all evidence comes from direct experiments. A theory can be tested by comparing it with other tested theories to see if there is any inconsistency. But ultimately, a theory that does not make any testable prediction is nothing more than a data table. It can be a beautifully decorated table, but it is still just a table. Let me explain.

Think about the following toy phenomenon: a ball is in a field divided into two sides, and it changes sides once in a while. My dataset consists of the times at which the ball passes through the central line that divides the field. Suppose now that I have five data points: t = 1, 5, 6, 11, 20. Now I tell you that I have a theory describing this data. My theory is

-6600 + 9950 t - 3941 t^2 + 633 t^3 - 43 t^4 + t^5 = 0.

In other words, my theory is that the times at which the ball passes the central line are the zeros of the above polynomial. There is only one problem: the equation above has exactly five zeros, and they are exactly the points in my dataset. This means that the above equation, despite being an equation, is nothing more than the list of points I had before, written in a different way.
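You can check in a couple of lines that the quintic encodes exactly those five points and nothing more; it is just (t-1)(t-5)(t-6)(t-11)(t-20) expanded:

```python
def theory(t):
    """The 'beautiful theory': a quintic whose zeros are the dataset."""
    return -6600 + 9950*t - 3941*t**2 + 633*t**3 - 43*t**4 + t**5

data = [1, 5, 6, 11, 20]
print([theory(t) for t in data])  # [0, 0, 0, 0, 0]
print(theory(2))                  # 1944, i.e. t = 2 is not a crossing
```

A degree-five polynomial has at most five real roots, so the "theory" cannot accommodate a sixth crossing even in principle: it is the data table in disguise.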

Any reasonable person will then complain: wait! You are predicting that the ball is never going to cross the line again! And then I say to you: don't worry, it's such a nice-looking equation! Be more of a post-empiricist and give less importance to predictions. Who needs to test such a beautiful theory? Besides, do you have a better theory to describe this data?

I rest my case.

Monday, 30 June 2014

The "Real World" Delusion

If you are a professional in any of those areas that are concerned with human development rather than generating money, you must have heard the question 'What is the real-life application of your research/work?' many times.

There is an interesting comment about that in the book

What are universities for? by Stefan Collini

I will reproduce that here:
And talking of literature, it’s usually at about this point in the argument that an appearance is made by one of the more bizarre and exotic products of the human imagination, namely a wholly fictive place called ‘the real world’. This sumptuously improbable fantasy is quite unlike the actual world you and I live in. In the actual world that we’re familiar with, there are all kinds of different people doing all kinds of different things – sometimes taking pleasure in their work, sometimes expressing themselves aesthetically, sometimes falling in love, sometimes telling themselves that if they didn’t laugh they’d cry, sometimes wondering what it all means, and so on. But this invented entity called ‘the real world’ is inhabited exclusively by hard-faced robots who devote themselves single-mindedly to the task of making money. They work and then they die. Actually, in the fictional accounts of ‘the real world’ that I’ve read, they don’t ever seem to mention dying, perhaps because they’re afraid that if they did it might cause the robots to stop working for a bit and to start expressing themselves, falling in love, wondering what it all means, and so on, and once that happened, of course, ‘the real world’ wouldn’t seem so special any more, but would be just like the ordinary old world we’re used to. Personally, I’ve never been able to take this so-called ‘real world’ very seriously. It’s obviously the brainchild of cloistered businessmen, living in their ivory factories and out of touch with the kinds of things that matter to ordinary people like you and me. They should get out more.
Of course, when faced with the 'real world' question asked by a friend, one has to make a hard choice: either succumb to the temptation of preaching about the importance of developing the human mind and lose the friend, or breathe deeply, smile and change the subject. I, usually influenced by my wife, who has much better social skills than I do, choose the latter. However, I still hope that one day she won't be there to prevent me from asking questions like:

So, you are going to have a child. What is the 'real world' application of that?

Friday, 27 June 2014

The Probable Universe

I'm writing a popular book on Bayesian probability called provisionally The Probable Universe. I will leave the draft (remember, it's a draft!) available as a PDF file on my website via the link:

Feel free to download and read it. Notice that some parts are incomplete, drafted, with typos and all other mess that appear in drafts.

Please leave comments, suggestions, requests, corrections and criticisms. Maybe one day I end up even publishing it if it becomes good enough.

Tuesday, 24 June 2014

Defending Philosophy, the Right Way

I have just read a blog post from the always eloquent Lubos Motl:

In this post he argues that philosophy is basically bullshit and philosophers don't do anything useful, and also that 'shut up and calculate' is the way to go in physics. Of course he is completely wrong. It doesn't matter that physicists like Hawking, Weinberg or even Feynman agree with him. They are wrong too.

All of them are wrong in the same way as the rest of the people in the world are wrong when they say that theoretical physicists are useless and theoretical physics, in its great majority, is just a waste of taxpayer money.

The argument against philosophy is that it is the wrong way to understand how the world works because it does not stick to science's fundamental principle of falsifiability. There are other criticisms too, like the over-reliance on non-mathematical language and a melancholy preference for classical physics over quantum mechanics.

However, all those criticisms miss the point of what philosophy is really concerned with. Understanding how the world works is just one part of philosophy, the part that serves as a foundation for, and gave origin to, science. But philosophy goes beyond that. It is a whole thinking endeavour concerned with the most important question in the universe: WHY?

Philosophy goes beyond science, as it allows itself to ask questions and consider situations which are out of science's scope. What is 'real'? Is there a meaning to 'truth'? And my favourite: is it possible that there is nothing else in the universe but my mind?

Most arguments against philosophy in that article simply assume that, just because someone has a philosophy diploma, whatever this person says is philosophy. In the same way that science is not what scientists do, philosophy is also not what philosophers do. Rambling senselessly is not philosophy. Using fallacies to support your arguments is not philosophy either; it's gibberish.

One of the greatest problems of academia is this stupid habit of one area of knowledge completely ignoring and ridiculing another without thinking deeply enough.

Just a last word about 'shut up and calculate': never 'shut up' when doing science. Ever. No matter what people say to you.

Monday, 9 June 2014

Science is NOT what scientists do

I was going to keep this for the book I'm writing about Foundations of Science, but I have just read a blog post from Sabine Hossenfelder and I could not stay quiet:

Is Science the Only Way of Knowing?

I will have to respectfully, but harshly and completely, disagree with almost everything she says in the article. To start with, she utters the dreadful sentence that I have heard many times:

"Science is what scientists do."

This definition is not only untrue but also useless. Let me justify that. Suppose we accept this as a definition of science. Where do we draw the line? I mean, is everything that scientists do science? When I (I am considered a scientist, by the way) fry an egg, is that science? As some could argue that it is, let me push further: when I swear because something fell on my thumb, is that science? I am not exaggerating; I am just showing that there must be a specification of which part of what scientists do is science.

Let us now assume that we can somehow say that some things are not science. Consider Sir Isaac Newton. Was it science when he was trying to draw a map of Hell? When a Nobel Prize winner starts to talk seriously about paranormal phenomena, is that science?

You might argue that we can decide what is science by consensus, but that is even more arbitrary. Appeal to consensus is a logical fallacy: although it might be evidence pointing in the right direction, it is never a proof. Let us say that somehow most of the scientific community becomes corrupt (that can happen; just consider governments...). The leaders then start to decide what is science and what is not according to what THEY do. Those who do not agree are, by convention, not doing science. If you still doubt that this can happen, I suggest you look for articles about the present situation in academia in the UK. Check authors like Stefan Collini and Thomas Docherty.

There is another catch: who decides who will be a scientist? Other scientists, of course. How? By consensus. But how is this consensus achieved? Well, they need to agree on a minimum of knowledge and skills that the person needs to have. They need to agree on a minimum definition of science. If you defined science as what scientists do, you have achieved nothing. It is a circular definition.

If all science were based on a circular consensus, it would stand on very fragile ground indeed. The point is that it is not. You can construct a definition of science on very rational grounds. The keys to that are concepts like consistency, probability theory and Bayesian inference.

Science is a process of gradually incorporating information into models that describe phenomena. It does not matter whether it is done by "scientists" or by "artists", as long as it is done correctly. By 'done correctly' I mean that information should be incorporated into the model in an unbiased way and that the final model NEEDS to be consistent. What we identify as 'truth' and 'understanding' are very subtle things to define, but the closest we can get to them is consistency. That is the key to science, and consistency can hardly be achieved by consensus, as thousands of years of politics have shown us.
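To make "incorporating information into models" concrete, here is a minimal Bayesian update sketch (the coin example is my own illustration, not taken from the draft): each observation reweights the candidate models by how well they predicted it.

```python
def bayes_update(prior, likelihoods):
    """One step of Bayes' rule over a finite set of models:
    posterior(m) is proportional to likelihood(data | m) * prior(m)."""
    unnormalised = {m: prior[m] * likelihoods[m] for m in prior}
    total = sum(unnormalised.values())
    return {m: v / total for m, v in unnormalised.items()}

# Two rival models of a coin; we observe one head.
prior = {"fair": 0.5, "biased": 0.5}
posterior = bayes_update(prior, {"fair": 0.5, "biased": 0.9})
print(posterior)  # the biased model gains weight: 0.45/0.70 vs 0.25/0.70
```

Whoever carries out this update, "scientist" or "artist", is doing it correctly in the sense above: the data, not the person's title, decides how the weights move.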

So, if one asks me whether "science is what scientists do", my answer is a definite NO. What scientists do is only science when they do it right, and 'right' here CAN be mathematically defined. I already wrote a bit about that here:

That document has many missing links, but I am preparing a larger one as I said. I will post things here as I write it.

Should you believe me or Sabine? Neither. This is not a question of consensus. As in science, you have to search for the information and update your opinion accordingly.

Thursday, 5 June 2014

Slender Man cannot be guilty: Because he does not exist!

One of the many other things I'm interested in besides physics is mythology. Every kind. I have been interested in it since I was a kid. When I became a teenager, I played RPGs partly because I enjoyed the great variety of mythologies in so many different universes. When I say mythology, I am also including religion.

These days, there have been news reports about two 12-year-old girls who stabbed a friend to prove themselves worthy to Slender Man. Slender Man is a mythological character that spread over the internet recently, but seems to have roots in German mythology. The Guardian has an article about the incident:

The obvious reaction of everyone involved was exactly the one we expected: they decided that it's nobody's fault. In fact, they decided it's Slender Man's fault, but as he does not exist, that's the same thing.

We know that there are many variables involved in cases like this, but we also know that two parties bear a large share of the responsibility: the parents and the government.

I will defend my thesis.

Twelve-year-olds are definitely, undeniably capable of telling reality apart from fiction unless they have some cognitive limitation. There are two places where they can be taught how to do that.

The first place is, of course, school. Schools HAVE THE OBLIGATION to teach children that ghosts, fairies, gods (yes, those should be included) and other supernatural beings are creations of human imagination and that, although people have the right to believe in them if they want to, there is overwhelming evidence against their existence.

If schools are not teaching that, it's because the curriculum does not include it. After 40 years of my life as a student and half of that as a teacher, I know very well that it's the GOVERNMENT, not the instructors (teachers, lecturers, professors), that decides what is taught in schools. We instructors have the choice of either obeying or being fired AND having our careers trashed by 'disciplinary actions'. Of course, we try to smuggle a little bit of sense into the system, but we are not supported by anyone, including parents.

So, clearly, if the school did not teach those children that Slender Man DOES NOT EXIST, then it's the politicians' fault. Politicians, of course, never accept the blame for anything and redirect it wherever it is easiest. In this case, to a NON-EXISTENT fictional character.

Now, even if the school has failed, this is no excuse for parents to try to avoid responsibility. They too have to teach their children the FACT that supernatural beings do not exist. The problem here is that many of them believe that they do! Does this exonerate them of all guilt? Of course not. On the contrary. Parents are responsible for their children independently of what they believe. When I was a child and was afraid of ghosts, my father would always tell me not to be, because ghosts do not exist. Many years later, I found out that he believes in ghosts, but he knew rationally that they should not exist, and that was what he chose to teach me.

Yes, those things can be avoided, but those who can and should do something are those we know will never take the blame and look for a scapegoat. Even one that is not real.

Thursday, 29 May 2014

Brains are More than Threshold Units

I was reading the following article yesterday:

Laser mimics biological neurons using light

The article is about a component that can emit laser light depending on the intensity of the light hitting it. In technical language, this is called a threshold unit. In fact, a threshold unit is a concept: a name used to describe any kind of material, device or system that can receive some kind of input, be it in the form of energy, information or anything else, and emit something whenever the input exceeds some threshold value.

The connection of threshold units with the brain is that neurons are a kind of threshold unit. There are many different kinds of neurons. Some of them are constantly emitting pulses of electricity at a certain frequency, while others remain "silent" until they become excited. An "excited" neuron changes its firing frequency whenever the amount of electricity it receives from other neurons to which it is connected goes above some value. As you can easily notice, this value is the threshold in a threshold unit.

The simplest threshold unit is the famous perceptron. The perceptron is not an object; it is a mathematical model that was developed to mimic the main function of the neuron. The perceptron is a mathematical structure with a certain number of "boxes" that work as input entries and one output box that spills out a number whenever the weighted sum of the inputs gets larger than some value. These models date back to the 1950s.
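To make this concrete, here is a minimal sketch of a perceptron in Python (the weights and threshold below are arbitrary values I chose for illustration, not from any particular model):

```python
def perceptron(inputs, weights, threshold):
    """Fire (output 1) only when the weighted sum of the inputs
    exceeds the threshold; stay silent (output 0) otherwise."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# With these (arbitrary) weights and threshold, the unit computes a
# logical AND: it fires only when both input "boxes" are on.
weights, threshold = [1.0, 1.0], 1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(x, weights, threshold))
```

Changing the weights and the threshold changes which input patterns make the unit fire, and that is exactly where learning enters the picture.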

Because the perceptron is a mathematical model, one can map any threshold unit onto something similar to a perceptron. For decades, perceptrons, and the many kinds of neural networks created by connecting perceptrons in different ways, have been studied. They are capable of memorization and learning within certain limits.

Why, then, you might ask, am I being picky and saying that the brain is more than threshold units? Neurons are threshold units and threshold units can be used to construct neural networks, but what makes a network learn is the pattern of connections between its units, not the units themselves. The point is that knowledge is not stored in the units; it is stored in the links they form with each other. If you put trillions of threshold units in a regular square network, nothing will ever happen in terms of learning. Without learning, thinking is not that great... So, what is missing?

The answer goes by the name of plasticity. This is the capacity neurons possess to create new connections and cut old ones. This is what changes the patterns and makes memorization and generalization (the two pillars of learning) possible. Although faster threshold units like the ones in the article might improve the speed of information transmission, that is no guarantee that they will improve higher abilities like creativity and understanding. They might lead to faster reflexes, for instance, but not to faster learning, as they have nothing to do with creating and severing connections.
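A toy sketch can illustrate the idea (the rule and every number below are illustrative assumptions of mine, not a model of real neurons): strengthen a connection when the two neurons it links fire together, let it decay when unused, and cut it once it becomes negligible.

```python
def plasticity_step(w, pre, post, rate=0.2, decay=0.05, prune_below=0.01):
    """One plasticity step for a single connection of strength w:
    strengthen it when the pre- and post-synaptic neurons fire together
    (a Hebbian-style rule), let it decay slowly when unused, and prune
    it (return None) once it becomes negligibly weak."""
    w = w + rate * pre * post - decay * w
    return None if w < prune_below else w

# A connection between two co-active neurons grows stronger...
w = 0.5
for _ in range(20):
    w = plasticity_step(w, pre=1, post=1)
print("co-active pair:", round(w, 2))

# ...while an unused connection decays until it is pruned away.
w, steps = 0.5, 0
while w is not None:
    w = plasticity_step(w, pre=0, post=0)
    steps += 1
print("idle connection pruned after", steps, "steps")
```

Notice that the units themselves never change here; it is only the strength and existence of the link between them that does, which is the point of the paragraph above.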

It is not that the article does not describe interesting work. It does, but one needs to be very careful with the actual implications of each line of research when we read about a new 'big breakthrough' every week...


Friday, 23 May 2014

Nobody Likes Serious Research, People Like Easy Rules

I read this article on The Guardian these days:

A big, juicy burger to anyone who knows what healthy eating is any more

The article is about how there are many contradictory recommendations about what constitutes a healthy diet. The article is not very good and does not contain much useful information. It is another of those thousands of articles trying to sound smart and sarcastic and achieving very little. Too bad so many people love this kind of article.

Anyway, what bothered me most was a comment from one of the readers saying that more 'serious research' on the subject was needed. Let me state something concerning this: the general public doesn't like serious research. The one thing people want is someone to tell them rules like 'you have to eat six tomatoes per day' or 'never eat sugar'.

Serious research will not give you those rules because they simply do not exist. It is not difficult to understand that the amount of nutrients one should or should not eat depends heavily on a huge number of variables. It depends on genetics, on health conditions, on how much exercise you do and even on the climate of the place where you live. All of that can affect the way your body metabolizes food and how much of each nutrient is needed.

All those articles you see in the news about correlations between a certain amount of some food and health problems are interesting, but their limitations should be considered. Usually they only establish correlations, not cause and effect. They are also difficult to analyse because many variables, themselves hard to control, might be affecting the results. Also, the samples are usually small, which does not help the statistical analysis, especially when it relies on inappropriate techniques.

The reason this kind of article has such great repercussions is that it is exactly what people want. If there is one thing I've learned in my academic career, it is that people do not want to support serious research; they want to support research that has a 'clear conclusion', that is fast, and that will give them a rule they can follow and then blame others for if it doesn't work. That is, surely, not serious research.

That's, unfortunately, how most research works today. It's not the scientists' fault. A scientist has to survive and has to do whatever there is money to do. We live in a world in which people don't mind lending their money for free to bankers and at the same time think that scientists are robbing society's money when they try to understand something deeply.

If you really want to understand what constitutes 'healthy eating' you should be prepared to support research that will probably take decades and will not result in a recipe book. For most things in life there are no simple rules.

Stating the Obvious Again

Good to find that I am not the only one to see that. By the way, before you smile and nod your head in agreement, take a moment to reflect on whether that is not the way you actually see education.

Thursday, 9 January 2014

The Number of Neurons in the Brain

There's a nice TED talk circulating on the Internet by a fellow Brazilian scientist about the number of neurons in the brain. Here it is.

The talk is quite interesting, but if you have ever read anything here on my blog (which, I confess, I update once every 6 months) you know that I do not like unjustified hype, and what I've been reading on the internet seems to qualify as such.

I can understand well why the talk has been so popular. She starts by saying that there is a number in science, the number of neurons in the brain, whose origin she looked everywhere for and couldn't find. That's popular. People love it when someone says that. It's like 'those old scientists were so full of themselves that they didn't bother to check whether it was true or not!' She says that nobody could tell her the origin of the number. The number, by the way, is that the brain has about 100 billion neurons.

Then she proceeds to talk about brain size in different animals and how the size of the brain does not determine the level of intelligence. After a while, she describes her work on an experimental method for counting neurons with higher precision than before. She finds that the previous number was wrong: it's not 100 billion, it's 86 billion.

Finally comes the twist. She first says that the neuron density in the primate brain is higher than in other classes, but is almost constant among primates. If it's the same, what distinguishes us from the other primates? She talks about the amount of energy the brain consumes and finally concludes, to everyone's surprise, that cooking is what makes humans different! That's because by cooking we can digest food better, freeing up time for our brains to develop other activities! We are humans because we cook!

I'm not being sarcastic, I did enjoy the lecture. However, I must make some harsh observations.

First of all, let me talk about the number of neurons in the brain. 100 billion neurons was never an exact number. It was an ESTIMATE. If you are not a scientist, then you are excused for not knowing what we mean by an estimate. When we scientists estimate a number, we are mainly concerned with what is called the order of magnitude: roughly, the power of 10 closest to the true result. If you pay attention, 100 billion IS the same order of magnitude as 86 billion, because 100 is very close to 86. In fact, it's amazingly close! It could have been any number, but it's almost the correct one. This cannot be a coincidence. A number so close to the real one MUST have some explanation; otherwise, getting it would have required extreme luck!
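The point can be checked in a couple of lines (a quick sketch; the helper name `order_of_magnitude` is just my own illustrative choice): on a logarithmic scale, 86 billion and 100 billion round to the very same power of ten.

```python
import math

def order_of_magnitude(x):
    """The power of 10 that a positive number is closest to
    on a logarithmic scale."""
    return round(math.log10(x))

old_estimate = 100e9  # the classic "100 billion neurons"
new_count = 86e9      # the refined measurement

print(order_of_magnitude(old_estimate))  # 11
print(order_of_magnitude(new_count))     # 11 -- the same order of magnitude
```

In other words, the refined count confirms the old estimate at the level of precision an estimate is meant to have; the difference between the two is only about 16%.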

I suggest you read the following book:

How many licks? Or how to estimate damn near everything
by Aaron Santos

This will help you understand that an estimate is different from a precise measurement. They serve different purposes.

There's one thing that I dislike in the talk. The speaker says that she looked everywhere in the literature to find where that number came from, but she couldn't find it and no one knew. Well, I decided to try my own literature review. I went to Wikipedia and looked up 'neuron'. If you click the link and go to that page, you will see that reference number 27 is the article:

The Control of Neuron Number, by Williams and Herrup

By clicking the above link, you can read the HTML version of the paper for free. There, under the section 'Total Neuron Number in Different Species', you will find a list of references to experimental work estimating the number of neurons in the brain to be, guess what, around 100 billion. In fact, there is the following reference from 1975:

Lange, W. 1975 Cell number and cell density in the cerebellar cortex of man and other mammals. Cell Tiss. Res. 157: 115–24

This reference (have I mentioned it is from 1975?) estimates the number of neurons to be... around 85 billion. I am not sure what exactly she meant when she said that she couldn't find in the literature where the number came from, but the above search took me around 10 minutes, including the time to browse the papers.

Once again, don't get me wrong. I'm not criticizing her work. I think that measuring things with higher precision is extremely important and I'm sure her work must be extremely good, but facts are facts.

Finally, although the talk does not say so explicitly, it makes you think that what makes humans different from other species is cooking. That is definitely not true. If it were, why have the other primates not copied us in all this time? Cooking requires the use of tools. It requires the use of fire. It seems reasonable to say that cooking brought us an evolutionary advantage, but other things did that as well. Still, there is more to the human brain than cooking, and it seems clear that there was something different even before cooking. Cooking might have helped to increase this difference. Writing did that as well. Not to mention science.

I feel obliged to repeat one thing here. Question everything. Always. Especially things that are nice to hear: those are the ones that will probably be too good to be true. No matter who says them. Especially me.