Blog Uncategorized

The Hydrus Project wins Intel’s Embedded Systems Competition

Me, our friend Eloi, Êmili, Guilherme, and Professor Giovani

After many months of hard work and dedication, our team accomplished its ultimate goal, and we won the competition!

The Hydrus project (as per these posts) was completed a single day before the competition. The last test happened on the last possible day. All of our presentation material was put together and edited right before the date. We lost pieces of the boat in transit, and so on. Towards the end of our endeavor, everything was insanely chaotic. But we thrived.

Personally, I am hugely grateful to team members Êmili and Guilherme, who always put in a lot of effort and whose belief in the project never faltered, even at times when I seemed aimless, or just plain going crazy.

In the end, it was a success. Not everything worked, and our execution was not perfect. Nevertheless, we ended up with a perfectly workable system, and we like the design of the software and the boat itself.

Here are the final report and presentation slides. Also a short video demonstrating it in motion:

I have chosen to use the boat as the subject matter of my term paper. During this process, it should be revamped to work with other control units (Raspberry Pi) and also made drivable by EPOS, the Embedded Parallel Operating System by LISHA.

Oh yeah, one last thing: the prize! We were awarded a trip to visit an Intel facility in the United States. Can’t wait!!


Hydrus Project Update

(This should have been published months ago (Nov. 2016), but I was so busy with the project itself I kinda forgot it was in the queue.)

A lot has been happening on the Hydrus Project front. The deadlines are getting tighter and there is still a ton of stuff to do, for the whole team. In this post I’ll go through what is new, what was accomplished, and what the main challenges were in getting there. Finally, I’ll summarize what is left to be done and ponder any loose ends.

Some of the boat’s electronics, as assembled in our test bench


System Board

In an effort to consolidate the electrical and electronic design of the boat, a new system board is being finalized. It has more of an integration role, presenting a ton of connectors for the different sensors and subsystem modules. It also contains an instrumentation amplifier for the pH probe, and a slave Arduino Nano for dealing with the ultrasonic sensor array. The Nano can also be used for interfacing with other, less important sensors, or other platform features, should the need arise.
The whole project is being done in KiCad, an open-source EDA suite. A custom component footprint for the GPS module had to be drawn, but all in all, most needed stuff was already in place. I really enjoy working with KiCad. It is simple, yet featureful, rarely getting in my way. And the 3D preview is a very nice bonus. 
The hardest part of the layout process was my insistence on using only a single copper side. This is very important for me, since we do not have a PCB prototyping machine, so the production techniques at our disposal are rudimentary. At best. But after a lot of swapping components around and painstakingly tracing everything by hand, it was possible to arrive at a viable layout, with only a single jumper resistor.
KiCad has come a long way since I first tried it many years ago. Fabrication of the board is expected to begin shortly, using PCB homebrewing techniques. If only we could make it look as good as in the 3D preview, with solder masks and silk layers and all… boy, I’d be really happy.

Boat Frame

The plug for the hull mold is being fabricated out of a styrofoam model given to us by one of our Joinville colleagues, who just graduated as a naval engineer. Everything is being done in the models lab, LabMod. The styrofoam model was covered in plaster and sanded to a smooth finish, converting it into a solid plug. It was then used to laminate a female fiberglass mold. We are now in the process of using the mold to derive both of our hulls.
Êmili is the one making sure our hull comes out nice and strong. The team expects the final boat frame to look pretty good, not bad at all for a bunch of mechatronics nerds. We are still figuring out the best configuration for the middle platform, and should be able to integrate the electromechanicals and electronics into the frame in a matter of weeks.
The hull plug during sanding…


..and the hulls after being laminated inside the negative


The station desktop program can now edit navigation routes and has received general polish. It did not change much though. The meatier part is the new simulation mode. I’ll talk about it shortly.
Most changes went into the drone firmware, as it acquired more capabilities and is now approaching a water-ready state. The original software architecture did not change at all, since it has proven to be a viable base for implementing all the required features for the prototype. 
One step back is that it was not possible to use a separate thread for driving the I2C bus, and by extension, the OLED screen. This is unfortunate, as the screen update rate was already quite slow. Now it has to be driven opportunistically, during slack time in the main thread. This will be investigated further, if time allows. It is not an essential feature of the project, but if the need to remove the screen ever comes, its bling factor will surely be missed.
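The opportunistic approach boils down to a simple guard in the main loop: only service the slow I2C/OLED transfer when enough slack remains before the next scheduled task. A minimal sketch of that decision, with the time units and the cost estimate being assumptions of mine rather than the firmware’s real numbers:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical worst-case cost of one I2C/OLED burst, in microseconds.
constexpr uint32_t kScreenChunkCostUs = 800;

// Service the screen only if the burst fits in the remaining slack
// before the next task deadline; otherwise skip it this cycle.
bool canServiceScreen(uint32_t nowUs, uint32_t nextDeadlineUs) {
    uint32_t slack = nextDeadlineUs - nowUs;
    return slack > kScreenChunkCostUs;
}
```

The screen simply updates less often when the main loop is busy, instead of delaying control tasks.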

Control and Simulation

Nothing of the above would amount to much if the boat could not fulfill its main purpose: autonomous navigation! Alright, we’d still have a cool boat. Anyway, we want a control loop running in the firmware.

I decided to implement it as a sort of state-machine control loop multiplexer. For every state, there is a basic behavior that, under a certain condition, may advance the system to the next logical step. This made control much simpler to implement than a monolithic, very complex controller.
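The idea can be sketched in a few lines. The state names and advance conditions below are illustrative assumptions, not the firmware’s actual states; the point is that each state runs its own small controller and only a satisfied condition moves the machine forward:

```cpp
#include <cassert>

// Hypothetical mission states; the real firmware's set differs.
enum class State { Idle, Navigate, Measure, ReturnHome };

// A tiny snapshot of conditions the per-state controllers check.
struct Boat {
    bool armed = false;
    bool atWaypoint = false;
    bool samplingDone = false;
};

// One basic behavior per state; a transition fires only when that
// state's advance condition holds, otherwise the state is kept.
State step(State s, const Boat& b) {
    switch (s) {
    case State::Idle:       return b.armed        ? State::Navigate   : s;
    case State::Navigate:   return b.atWaypoint   ? State::Measure    : s;
    case State::Measure:    return b.samplingDone ? State::ReturnHome : s;
    case State::ReturnHome: return s; // terminal in this sketch
    }
    return s;
}
```

Each branch stays simple enough to test in isolation, which is exactly what makes this nicer than one big controller.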

There is also a simulation mode embedded into the station application, which can override some sensors and actuators according to a simplistic model, for testing.

Next Steps

This project is big. Like, huge. A lot of stuff. I had no idea what I was throwing myself into. But yeah, that’s life. We’ll manage.

Huge thanks to my teammates for putting up with everything and remaining calm, collected and motivated.

The time for presenting everything is approaching fast. Let’s see how this will turn out. Until next time!


The Hydrus Project

It has become traditional for the Brazilian Symposium on Computing Systems Engineering to be held together with the Intel Embedded Systems Contest. For the contest’s 2016 edition, I have teamed up with a couple of friends, Guilherme Pangratz and Êmili Bohrer, and my mentor, Giovani Gracioli, to come up with an exciting entry. Back in April, the project was approved, and development is going on full-steam, so I thought it was time to blog about it.

The proposal

The state of basic sanitation in Brazil is dire in many places. By investigating ways of improving our ability to detect irregular sewage dumping or other changes in water quality in bays and reservoirs, we decided to create an aquatic drone that navigates and measures water quality autonomously. We named the drone and the project after Hydrus, the “male water snake” constellation of the southern sky.

Our drone is to be based on the Galileo Gen 2 board, and should be able to leave its base station, autonomously go to set waypoints, acquire some data, and return home.

Current status

Base station application prototype
  • Hardware: We have created power, GPS and frontend boards for the boat. The frontend has indicator LEDs and a shutdown button, as well as an AMOLED screen for displaying essential system information. The power board contains a very basic power backbone for distributing power to the electronic motor controllers, a battery level sensing circuit, and a simple +5V output for powering the sensors.
  • Software: The base for our firmware is a cyclic executive task scheduler with a scheduling table, and a global blackboard data structure. There are tasks for system management, navigation, sensing, and communication. Parsing of GPS navigational data is done, as well as motor control and vectoring. Communication with the base station is partially implemented.
  • Base station software: as of now, the application for the base station can render the current drone position, and display the essential information about the drone, in real time. It can render a preset route atop the map. Missing is the ability to edit the route from the map itself.
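The firmware base described above can be sketched compactly. The slot layout, task names, and blackboard fields here are illustrative assumptions; the real scheduling table is certainly richer:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Global blackboard shared by all tasks (fields are placeholders).
struct Blackboard {
    double latitude = 0, longitude = 0;
    int navRuns = 0;    // counters stand in for real task work
    int senseRuns = 0;
};

using Task = void (*)(Blackboard&);

void navigationTask(Blackboard& bb) { bb.navRuns++; }   // e.g. update heading
void sensingTask(Blackboard& bb)    { bb.senseRuns++; } // e.g. read GPS fix

// Cyclic executive: a fixed table of task slots executed in order,
// one slot per tick, wrapping around at the end of the minor cycle.
struct Scheduler {
    std::array<Task, 4> table{};
    std::size_t slot = 0;
    void tick(Blackboard& bb) {
        table[slot](bb);
        slot = (slot + 1) % table.size();
    }
};
```

Because the table is fixed at build time, timing behavior is fully predictable, which is the appeal of a cyclic executive over a preemptive scheduler for a small prototype.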

What’s next

There is still a ton of work ahead of us! We haven’t yet received the sensors we ordered; when they arrive, they will need to be integrated and tested. We are finishing work on a test frame, so that we can implement the navigation next, based on sensor fusion algorithms and a control state machine. Exciting stuff!


dw Engine Update Two

dw Engine is my project on QML-based game facilities that extend Qt to form a 2D game engine for desktop and mobile. I have been taking my sweet time bringing the engine to higher standards of ease of use and performance, and I would like to share some of it, along with some thoughts on Qt’s performance and suitability for such a project.

New Graphics and New Test Level

Tiles were painted in Krita
I decided to take the plunge and convert the demonstration game to HD. Along with that came a new, original test level and new sprites for the common gameplay objects, most painted in Krita and a few rendered in Blender. Sonic’s 60-something poses still need to be remade, though. Maybe I can get some help with that…


Texture and Spritesheet Management

The most noticeable problem during the transition to HD was the constant uploading of textures to the GPU. It seemed to happen whenever an Image item was made visible, or when the properties of an AnimatedSprite object changed. This was destroying the framerate on mobile, and highlighted the need for more explicit control of the texture lifecycle if one wants to make an action game in QML. The solution was to create specialized classes (dwTextureCache, dwTexture, and dwImageItem) for managing and displaying textures on the screen. The system already supports online conversion of textures to 16bpp and allows for future extension to support texture compression. This will be important on mobile, because the game’s rendering performance is constrained mostly by memory bandwidth limitations, and compression helps with that greatly.
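The core idea behind a dwTextureCache-like class is small: share one uploaded texture per path, so that showing or hiding items never triggers a re-upload. The class shape below is my own sketch, not the engine’s actual interface:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Stand-in for a GPU-resident texture (real class would hold the GL
// handle, dimensions, 16bpp conversion state, etc.).
struct Texture {
    std::string path;
};

// Textures are uploaded once per path and shared; weak_ptr entries let
// a texture be freed once the last item using it is gone.
class TextureCache {
public:
    std::shared_ptr<Texture> acquire(const std::string& path) {
        auto it = cache_.find(path);
        if (it != cache_.end())
            if (auto tex = it->second.lock())
                return tex;                  // reuse the existing upload
        auto tex = std::make_shared<Texture>(Texture{path});
        cache_[path] = tex;                  // remember it weakly
        return tex;
    }
private:
    std::map<std::string, std::weak_ptr<Texture>> cache_;
};
```

Items then hold a `shared_ptr` instead of owning pixel data, which is what gives explicit control over the texture lifecycle.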
New spritesheet system is simple to use
There was also the need to replace the usage of AnimatedSprite, which has a cumbersome interface and most of the problems of Image. So a new subsystem was introduced, with a Spritesheet class and a cache for it, just like with the textures. Spritesheets describe the animations contained in a texture atlas in terms of sequences, which can have various properties, such as parametric animation speeds and automatic transitions to other sequences. Spritesheets are described by a JSON file in a simple format.
Sprites are instances of a specialized subclass of dwImageItem that keeps track of the animation data and is updated by a centralized animation controller. The controller keeps track of multiple animation domains and allows animation control in a global fashion.

Level Editor

The in-game level editor that was mostly stuck in development hell was rethought and is now good enough for my use case. It allows editing of the object layout for a stage, and has specialized modes for adding tile and geometry object types. It is modular and is not even loaded if a level is not started in debug mode, but otherwise can put the entire field in “edit mode” instantly by a single press of the escape key.
Editing the test stage mid-gameplay

Other Small Stuff

Support for game controllers was added, on all desktop platforms, courtesy of SDL2. This was thoroughly tested using my trusty 360 controller, and fleetingly using a couple other controllers as well. It should work on mobile too but this was not tested. 
Moreover, there was a need to render water as a simple colored quad that would multiply the colors underneath it, simulating what a 90’s console could accomplish via palette changes mid-hblank. However, there are no blending modes available in QtQuick besides common alpha blending. The way to circumvent this was to create a special QSGMaterialShader subclass that executes custom OpenGL code upon activation, and change the blending mode this way. Thanks to Giuseppe D’Angelo from KDAB for this great tip! To use this material an entire chain of classes had to be created culminating in a special node type. I plan to extend this special node in the future to allow it to render arbitrary textures in arbitrary blending modes, and maybe even point particles.
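The blend mode set up via `glBlendFunc(GL_ZERO, GL_SRC_COLOR)` computes, per channel, `result = src * dst` with colors in [0, 1]. A tinted water quad therefore scales down the colors underneath it, channel by channel, which is what mimics the old palette trick. The arithmetic is easy to verify in isolation:

```cpp
#include <cassert>

// Per-channel color in [0,1], as the blend hardware sees it.
struct Color { float r, g, b; };

// What glBlendFunc(GL_ZERO, GL_SRC_COLOR) computes for each fragment:
// dst is multiplied by the incoming src color, and src contributes
// nothing else (its factor is zero).
Color multiplyBlend(Color src, Color dst) {
    return { src.r * dst.r, src.g * dst.g, src.b * dst.b };
}
```

A bluish quad like (0.5, 0.5, 1.0) halves the red and green beneath it while leaving blue untouched, darkening the scene toward blue.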

QtQuick and Performance

Sometimes I wonder if I have chosen the right platform for this project by choosing Qt. Certainly QtQuick is more than sufficient for creating casual games and even some more involved examples, but is it ready for a platformer with sprawling levels and tens of objects onscreen at once? After having to implement a lot of custom infrastructure that replaces core use cases of QtQuick (images, sprites, etc), I would say no, it is not. 
Object creation is very slow, frame sync is a bit wonky on all platforms I test except for Android, the JavaScript engine is a source of major and unexplainable frame time variability, and so on and so forth. I still need to implement some sort of object pooling, otherwise frame skips are going to be a frequent and sore sight on an eventual mobile release.
However, even with all those shortcomings, QtQuick and QML have allowed me to accomplish far more than I initially expected when beginning this project. If one understands some of the QML engine’s inner workings and good patterns for performance, the achieved level of productivity can be high. And there is the indisputable truth, right here in front of me, that the QSG renderer can churn out 150 frames per second at 1080p on Intel graphics with Mesa. It is a 2D game, sure, but these are performance levels that I would expect from a game engine.
I guess at some point I should go see what can be done with Qt3D… 🙂

Microcontroller and Instrumentation Experiments

Holidays are still going strong here in Rio. But I guess as a result of some subconscious new year resolutions, it is finally time to clear up the publishing queue a little bit. Blogging has been slow because things have been moving past me alarmingly fast.

Off the top of my head, the first thing that comes up is that I presented a paper at SBESC 2015 in Foz do Iguaçu (link to it when the proceedings are out). A huge shout-out to my teacher and mentor Giovani Gracioli is in order: every single meetup or thing we do together is a lesson. Thank you so much.

This post is not about proper research activities, though; that is coming at a later date. Now I’d like to ramble about some experiments done in two courses: Microcontrollers and Instrumentation. Working within the limitations of TI’s Tiva C Series LaunchPad as an interface board to the PC, my tasks were to interface with varied low-cost sensors, trying to build valid instruments out of the setup, and to test them. Being the Qt junkie that I am, for each of them there is a Qt/QML computer program that provides the interface. Here they are:

Sound Frequency Meter

Trying to make sense of noisy input
The first project goal was to construct a useful measurement instrument out of a microphone breakout board not unlike this one. It did not have an analog out, though, so some analysis and pin soldering was needed. I decided to go for a frequency meter application because the necessary algorithms (FFTs) are readily available in good quality. The library chosen for analyzing the audio data was libfftw.
In the board firmware, a simple program samples the ADC at 22050 Hz via a timer. The most significant 8 bits are then sent to the PC via serial. It is a crude mechanism; an improvement would be using the USB device interface for proper sound capture. The computer application takes care of interpreting the data as audio and analyzing it. There are options to normalize the input for display and also to turn off the Hann window used to filter the input.
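The Hann window is the step applied to each frame before it goes to libfftw: it tapers the frame ends toward zero to reduce spectral leakage. A minimal sketch of that windowing step (the frame handling around it is simplified):

```cpp
#include <cmath>
#include <vector>

// Multiply a frame of samples by the Hann window
// w[i] = 0.5 * (1 - cos(2*pi*i / (n - 1))),
// which is 0 at both ends and 1 at the center of the frame.
std::vector<double> hannWindow(const std::vector<double>& frame) {
    const std::size_t n = frame.size();
    std::vector<double> out(n);
    for (std::size_t i = 0; i < n; ++i) {
        double w = 0.5 * (1.0 - std::cos(2.0 * M_PI * i / (n - 1)));
        out[i] = frame[i] * w;
    }
    return out;
}
```

Turning the window off simply means feeding the raw frame to the FFT, which makes the displayed peaks sharper for clean tones but smears noisy input.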
All the analyzed data is drawn via QML’s Canvas element and some simple JavaScript, both the input waveform and its representation in the frequency domain.
A report (in Portuguese, sorry) was written with more details.

Rotation Speed Meter

The complete test setup
On a second Instrumentation task, I was given a very simple rotary quadrature encoder, and a brushed permanent-magnet servomotor to go with it. My task was to assemble both together somehow, and use the encoder to measure the current speed of the motor.
The coupling of the motor was something quickly put together in the course of a day in the fabrication lab. The most interesting part of the assembly process was the manual machining of the axle coupling on a lathe.
The capture board was required to do more this time, taking care of debouncing and interpreting the state changes of the input signal. There are hardware peripherals on the Tiva (QEI) specifically for interfacing with such encoders; however, they could not be used successfully because the cheap encoder was too noisy for the QEI to handle. So debouncing was implemented in software, and the output to the PC took the form of ‘L’ and ‘R’ character pulses to indicate that the encoder moved. The implementation for this is based on some Arduino code I found around the web. It was used because I thought its state-transition scheme was very elegant and (cursorily) performant.
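That transition scheme is the classic table-driven quadrature decoder: the previous and current 2-bit AB states form a 4-bit index into a 16-entry table of position deltas, and any invalid (bouncy) transition maps to 0 and is simply ignored. A sketch of the idea, patterned after that family of Arduino code rather than my exact firmware:

```cpp
#include <cstdint>

// Delta table indexed by (prevAB << 2) | currAB. Valid Gray-code
// transitions yield +1/-1; invalid/bouncy ones yield 0. The sign
// convention (which direction is positive) depends on wiring.
static const int8_t kDelta[16] = {
     0, -1,  1,  0,
     1,  0,  0, -1,
    -1,  0,  0,  1,
     0,  1, -1,  0
};

struct QuadDecoder {
    uint8_t prev = 0;   // last AB state, 2 bits
    long position = 0;  // accumulated encoder counts
    void sample(uint8_t ab) {            // ab = (A << 1) | B
        position += kDelta[(prev << 2) | ab];
        prev = ab;
    }
};
```

Because glitches land on table entries equal to zero, this doubles as the software debounce without any explicit timing logic.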
The PC application reused the skeleton of the previous one and is much simpler this time, counting pulses over a period of time and estimating rpm accordingly. The report goes only as far as this point in development. However, as some sort of project epilogue, an H-bridge came into play, and some rudimentary speed control for the motor was assembled and implemented. That was the most fun part of it all.
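The rpm estimate itself is a one-liner: pulses counted over the window, divided by the encoder’s pulses per revolution, scaled to a per-minute rate. The pulses-per-revolution figure below is an assumption for illustration; it depends entirely on the encoder used:

```cpp
// Convert a pulse count over a sampling window into rpm.
// pulsesPerRevolution = 96 is a hypothetical figure; use the value
// from the encoder's datasheet.
double estimateRpm(long pulses, double windowSeconds,
                   long pulsesPerRevolution = 96) {
    double revolutions = static_cast<double>(pulses) / pulsesPerRevolution;
    return revolutions / windowSeconds * 60.0;
}
```

Longer windows smooth the estimate at the cost of responsiveness, which is the usual trade-off in pulse-counting tachometers.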

Camera Interfacing

Using friends as models
For the Microcontrollers course, my task was to interface with a VGA camera with a parallel interface (and I²C for control), the OV7670 module. The module does not have a framebuffer, and the transfer data rate is too much for the serial interface. Since I was trying to avoid all the hassle of the USB interface because I didn’t have much time, I decided to see what could be done without it, and without DMA, to keep things simple. It turns out it was possible to fit an 8bpp QQVGA picture in the device’s RAM as a framebuffer. With it, the serial data rate did not need to be very fast (and unreliable). Capturing would then be done in a scan-then-transmit serial fashion.
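The back-of-the-envelope check behind that framebuffer decision is simple: a QQVGA frame at 8bpp fits in the TM4C123’s 32 KiB of SRAM with room to spare, while even QVGA at 8bpp would already overflow it:

```cpp
#include <cstddef>

// Bytes needed for one frame at the given resolution and bit depth.
constexpr std::size_t frameBytes(std::size_t w, std::size_t h,
                                 std::size_t bitsPerPixel) {
    return w * h * bitsPerPixel / 8;
}

// QQVGA 8bpp: 160 * 120 = 19200 bytes, under the 32 KiB SRAM budget.
static_assert(frameBytes(160, 120, 8) == 19200, "QQVGA 8bpp frame size");
// QVGA 8bpp would already exceed the chip's SRAM.
static_assert(frameBytes(320, 240, 8) > 32 * 1024, "QVGA would not fit");
```

The remaining ~13 KiB covers the stack and the rest of the firmware’s state, which is why QQVGA was the sweet spot.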
The PC viewfinder used the same serial access class wrapped in a camera interface, but a whole different UI code, using QGraphicsView, which was deemed easier for this application. A button can be pressed to save the image too. 
Around 1.7 fps was achieved using this scheme. Can we do better with the Tiva? Using USB and DMA, I believe so. But is the Tiva’s main processor fast enough to stream a full VGA frame, in color, at 30 fps? I wouldn’t bet on it. Hardware engineers are probably cringing right about now thinking of this setup; this is the kind of development task an FPGA is expected to excel at. Then again, the fact that it ultimately works is in itself an interesting thing.

Source Code

The source code for these experiments is in this repository. Hopefully it can help someone who needs to quickly hack something together using Qt and the serial port to interface with custom hardware.
That’s for today. Happy New Year!!! 😀

TampereGotchi on GitHub

It’s about time! For almost a year after the project’s completion at the University of Tampere, the source code was left bitrotting on my (and my colleague’s) hard drives. Since then I have remembered, countless times, that I had to upload it, blog about it, and fix a couple of things on it, but never had the time. Hopefully it’s not too late.

TampereGotchi is a Finland-themed clone of the most popular franchise of virtual pets. Basic actions like feeding, cleaning, and playing games with your pet are implemented, as well as sending and receiving pets to and from a server and sharing them via a code. The game is not finished and likely never will be! But it is a great testbed for playing with Qt 5 and QML.

I am particularly proud of the polish that went into the main screen and some other tidbits, and the spaceship minigame. Also, Joona and Ammar did a great job with the sprites. Thanks to our PMs and teachers at the University of Tampere who made this school project something special and very fun to work on!

The repository with the code and assets is available on GitHub. Hopefully some will clone it and mess with it a bit.

An APK built for armv7 processors is available here.


Back to Brazil: LISHA and other tidbits

Exchange period is finally over. Truth be told, it has been over for almost two months! But it feels like only now I have got my bearings back again. It is not easy leaving Finland, what an awesome country!

Just as I was doing before leaving more than a year ago, I am back at work at LISHA, the Hardware and Software Integration Laboratory at Santa Catarina Federal University (UFSC), while working on graduating as a Mechatronics engineer. Under the supervision of Prof. Dr. Giovani Gracioli, my current field of research is resource synchronization protocols for multiprocessor real-time operating systems (RTOSs), or more specifically, EPOS.

Working at such a high level of research is very challenging. Reading and reviewing articles that represent the top contributions in the field of RTOSs is mandatory for background research, and is my current focus. Soon we shall be moving on to algorithm modelling.

On a lighter note, I am also taking regular courses from the programme, and my favorite so far has been Microcontrollers. We are exploring, one by one, every peripheral and interface of the Tiva C Series TM4C123GH6PM microcontroller (using the LaunchPad development platform). It struck me how much functionality was crammed into that IC. The CCS development environment could deliver a better experience, especially on Linux, but thankfully one only needs a standard gcc cross-compiler toolchain (arm-none-eabi) to develop for the board. TivaWare, which includes all the necessary makefiles, also helps a lot.

I plan to blog about the course’s final project and any other stuff that ends up being done with the board. Also, if there is any progress on dw Engine, I shall post about it. A friend and I are in the process of sketching out an original level and shipping a demo. Let’s see what happens!


Presenting neiatree

Mandatory Screenshot

neiatree is an asset tree processing tool for games or other multimedia applications. I made it after I couldn’t find something similar that was as simple to set up and use as I wanted, so why not scratch my own itch? With this naming convention following neiasound, I guess I’m starting a neiasomething library collection. neiaframeworks, perhaps? Hmm…

So, about neiatree. It allows you to parse a directory tree into another, optionally processing the files into other files. Inspired by make, it also keeps track of source file modification dates and updates only what has changed. So you can, for example, compress textures and sounds for your game as a build step that runs automatically, before or after a build or a run. Want to use different compression settings? Not to worry: change the rules and clean your destination folder. If you did not change anything, the overhead in project build time is negligible. A tenth of a second or so.
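The make-style freshness rule at the heart of this is tiny: a destination file is rebuilt if it does not exist yet, or if its source was modified after it. A sketch with timestamps as plain integers (the actual tool, being Qt-based, presumably gets them from something like QFileInfo::lastModified()):

```cpp
#include <optional>

// Decide whether a destination file must be (re)processed.
// srcMTime:  modification time of the source file.
// dstMTime:  modification time of the destination, or nullopt if the
//            destination has never been built.
bool needsRebuild(long srcMTime, std::optional<long> dstMTime) {
    if (!dstMTime) return true;      // never built: always process
    return srcMTime > *dstMTime;     // source changed since last build
}
```

Applied over a whole tree, only the changed files pay the processing cost, which is where the near-zero incremental build time comes from.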

It is easy to integrate it into your toolchain. The only build dependency is Qt 5.4. You can use an older version with simple tweaks.

Licensed under 2-clause BSD. Go check the GitHub repository!


Intel RealSense for Mobile Devices: Aftermath
Geting, Rauli, Me 🙂

It is hard to believe the project is over already, and I did not post anything about it apart from a passing mention when the KoneCranes App project was concluded. Perhaps it is because our great PM Cyndi took care of keeping a nice and steady blog about all of our activities, productive and social. Here it is.

I can’t think of anything but praise for the team. We simply work great together, top to bottom, in a fashion I have only fleetingly experienced before. Even if the project turned out to be crap, at least we would have made some good friends. Alas, it turned out our project was in the top 4 shortlist, and we competed head-to-head for the win! Nope, we did not win. So close…

What we ended up with as deliverables was mainly documentation for ideas for future RealSense applications, and a proof of concept implementation of 3D camera usage for unique party experiences. Those are to be licensed to Intel, so meh, I can’t post anything here 😉

If I had to distill a main project takeaway from a technical perspective, it would be that Intel is right on the money. The “future” mobile experiences will be defined by contextual computing, and quality of implementation matters. Specialized hardware with good software middleware is a must for developers, given the sheer amount of heterogeneous solutions that need to come together to give a contextual computing architecture shape and protocols. This is a very exciting time for developers. Our camping knives are being sharpened for us; our tents will rise by themselves.

Let’s explore!


Projective Game Platform for Public Spaces

Hello everyone! 🙂

My second and final semester at the University of Tampere has just come to an end. I am very thankful for being invited to this great institution. The ambiance and facilities are just as excellent as the staff and the quality of teaching. This post is about my project for the Human-Technology Interaction Project course at the School of Information Sciences.

It is a proof of concept for a gaming platform that could be applied to public spaces, using multiple projections and the original Microsoft Kinect. Not terribly original, but functional, and quite fun! Check out footage from our user testing (that means a party in my room :D):

It depends on Qt 5.4, OpenAL (it embeds a version of neiasound), OpenNI2, and NiTE2. Code is available here, on GitHub.