For the last two weeks I was unavailable because of a “monstrous exercise” from university. Everyone complained it was huge, but people were shocked when they saw the size of my code (2,000 lines) – it was about twice as large as everyone else’s. It’s not that I’m a bad programmer, and it’s not that I reinvented the wheel or anything like that. It was simply that my code was safe, while the code of most other people I saw wasn’t, because we weren’t required to make it safe.
The assignment was to build a server and clients for a Nim game with several players (instead of just two, like the original version), along with a built-in chat. The actual point of the exercise was to make the server non-blocking, i.e. to use select() with only one thread. This means, for example, that if we temporarily can’t read data from one client, we can still check whether we can send it data or communicate with other clients. But let’s leave that technical part aside, since what actually made the code complex was the other “minor” things.
Note: I have no criticism at all of the staff of this specific course; it is simply an example of a university-wide policy.
In general, when programming academic tasks, and especially in C, unless it’s a course in security or some other special case, you are allowed to make the following assumptions:
- Input from the user is valid (less common than the other assumptions, but still frequent)
- We won’t run out of memory (in C, malloc() won’t fail)
- Data received over the network is in the format we expected
- No one will try to break our system maliciously
Now, let’s count the places where the server/client from our exercise can be attacked:
- Entering a wrong move to win the game
- Example: Enter “A -5” instead of “A 5” to add 5 cubes to a stack instead of removing them (which is what should happen)
- Potential Result: Win the game by cheating
- Sending large chat messages to attack client/server
- Example: The protocol says the sender transmits the size of the message to be read over the network. Most people used a short (16-bit) variable for that – meaning messages of up to 64 KB are legal!
- Potential results:
- Buffer Overflows
Most people allocated a static buffer of 1024 bytes (1 KB) for receiving messages, since someone asked whether 1024 is a valid assumption on input length. Reading 64 KB into a 1 KB buffer will crash our program or lead to execution of arbitrary code.
- Wasting resources
Allocating 64 KB for each client (to read chat messages progressively until receiving them is done) is a lot of memory
- Abusing the lack of timeouts on clients
- Example: Connect with many clients, and do nothing after the connection is established.
- Result: a trivial denial of service – exhaust the maximum number of available connections, or waste all the server’s resources if no such limit is set.
- Sending messages over the network in an invalid format (this can also happen if the client/server is simply buggy)
- Example: Send random binary junk
- Result: watch the server crash or behave strangely
Of the above problems, my partner and I tried to take care of most, and I believe we handled about 90% of these cases. This is where we made our lives harder – we didn’t have to handle any of them!
Now, obviously there were no requirements to handle these cases. This was done to let students focus on the subject of the task – non-blocking network communication. And that is indeed something which should be done – otherwise you’d spend most of the exercise time (like me) on things which are barely relevant to the current subject.
So, after agreeing that we must make these assumptions, why am I writing this post? Because not even a single course in security and/or secure programming is mandatory! Or at least that’s how it is at my university (and it’s a relatively respectable one). This means a student can finish an entire degree without handling these problems even once! The implications can be devastating – if we code in the outside world the way we did in university, our programs will have more security holes than Swiss cheese!
Three weeks ago, I had to give a lecture (in a seminar) about common programming mistakes that make software insecure. At first I thought most of the mistakes I was going to discuss were ridiculous, but then I saw them pile up in more and more “respectable” places – both commercial companies (Apple, Microsoft, …) and open-source organizations. And like other programmers, I make them too, even though I hate to admit it.
So, what can Universities do about it?
- Change the requirements so that all exercises must be secure
- Bad solution! The amount of work would make all students crash as if someone were DDoSing them :P
- Mention the risk of each assumption inside each exercise description
- Add mandatory courses/lessons in application security and/or safe programming
- Instead of a one-time experience during your studies (one mandatory course/lesson), have one task that must be safe in each course (where applicable)
And what can we do about it?
- Try to show people we know where their code is dangerous, and increase their awareness.
- If possible, try to attack their code and show them the actual risks! (because I’ve heard enough people say “The risk is only theoretical – nothing will happen”)
- If you are a student, show this to the professors working on application security in your department, so they can push the university to change.
- Some of them weren’t aware that the situation is so bad, and when they heard, they started working to integrate the solutions mentioned above
- But this is less likely to work…
- Make our own code safe!
That’s all for today. Spread awareness about security, and if you (as a student/programmer) found this post helpful, please let me know.
One last thing – I applied to GSoC (before the deadline), I’ll post my application later :)