Physics simulation using Bullet

After setting up the Bullet library in the previous post, we will now move on to the first Bullet program. In this program I will create a plane and add some rigid bodies to the scene. Rigid bodies are objects which do not get deformed when they collide with something else. Most of the trees, buildings, stones, etc. in a 3D scene will be rigid bodies. Bullet provides the class btRigidBody for defining rigid bodies. For our program we are going to use spheres as our rigid bodies; the reason for choosing spheres is that they are the easiest objects when it comes to collision detection. You need only two pieces of information to detect a collision: the radius and the position. What I have done is create a class called bullet inside my tools library, which will take care of all the physics that I will deal with throughout the project.

class bullet 
{
public:
    bullet();
    ~bullet();
    btStaticPlaneShape* plane;                  //infinite plane for tests
    std::vector<btRigidBody*> bodies;           //all the rigid bodies
    
    btClock m_clock;
    btDynamicsWorld* world;                     //every physical object goes into the world
    btDispatcher* dispatcher;                   //passes colliding pairs to the right collision algorithm
    btCollisionConfiguration* collisionConfig;  //which collision algorithms to use
    btBroadphaseInterface* broadphase;          //should Bullet examine every object, or just the ones close to each other?
    btConstraintSolver* solver;                 //solves collisions, applies forces and impulses
    
    void createInfinitePlaneAtOrigin();

    void bullet_stepSimulation();
    btRigidBody* addSphere(float rad, float x, float y, float z, float mass);
};
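
Before any shapes can be added, the world itself has to be wired together in the constructor, and bullet_stepSimulation() simply advances it every frame. Here is a minimal sketch of both (not the exact original code; it assumes the member names declared above):

bullet::bullet()
{
    //pick the default collision setup: configuration + dispatcher decide HOW things collide
    collisionConfig=new btDefaultCollisionConfiguration();
    dispatcher=new btCollisionDispatcher(collisionConfig);
    broadphase=new btDbvtBroadphase();                  //broadphase quickly rejects pairs that are far apart
    solver=new btSequentialImpulseConstraintSolver();   //resolves contacts and constraints
    world=new btDiscreteDynamicsWorld(dispatcher,broadphase,solver,collisionConfig);
    world->setGravity(btVector3(0,-10,0));              //simple downward gravity along Y
}

void bullet::bullet_stepSimulation()
{
    //advance the physics world by the time elapsed since the last call
    btScalar dt=m_clock.getTimeMilliseconds()/1000.f;
    m_clock.reset();
    world->stepSimulation(dt);
}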

The main program can be divided into four parts for simplicity. Here are the parts of our first Bullet physics program:

  1. Initializations: all the Bullet initializations are done here.
  2. Creating a plane: create a plane shape (I will explain below what that actually means).
  3. Adding spheres.
  4. Rendering the spheres and the plane.

Explaining adding spheres and planes: when you run your graphics engine without any physics, all you're concerned about is the 3D world that you're rendering things in. You're not bothered about collisions between the objects in your scene; all you care about is the shading of your objects, i.e. how they look. The moment you add a physics library to your scene, you create another world. Think of it as a parallel universe to the 3D world that you're building. This parallel universe is responsible for calculating the positions of your objects based on the physics. This world, let's call it the PHYSICS world, is not bothered about HOW your objects are shaded. In this world we are strictly concerned with the position, mass, and inertia of each object. In our application we will maintain a vector of btRigidBody pointers for all the shapes that we have in the 3D scene, and we will use this vector for all the collision handling in the scene. So when you add a plane or spheres, you're actually adding them to this vector. Bullet provides different shapes that we can add to our scene; in our program we will use two of them, the plane and the sphere. When we add any shape to the vector I mentioned above, we also specify its position in a btMotionState object, which is passed along with the mass and inertia to btRigidBody, and the body is finally added to the vector. Here is a sample code to add a rigid body for your physics.

btRigidBody* bullet::addSphere(float rad, float x,float y,float z,float mass)
{
    btTransform t;    //position and rotation
    t.setIdentity();
    t.setOrigin(btVector3(x,y,z));    //put it to x,y,z coordinates
    btSphereShape* sphere=new btSphereShape(rad);    //it's a sphere, so use sphereshape
    btVector3 inertia(0,0,0);    //inertia stays 0,0,0 for a static object, otherwise
    if(mass!=0.0)
        sphere->calculateLocalInertia(mass,inertia);    //it can be computed by this function (for all kinds of shapes)
    
    btMotionState* motion=new btDefaultMotionState(t);    //set the position (and motion)
    btRigidBody::btRigidBodyConstructionInfo info(mass,motion,sphere,inertia);    //create the constructioninfo, you can create multiple bodies with the same info
    btRigidBody* body=new btRigidBody(info);    //let's create the body itself
    world->addRigidBody(body);    //and let the world know about it
    bodies.push_back(body);    //I store them in a vector too, to make cleanup easier
    return body;
}
// code: https://www.youtube.com/user/thecplusplusguy
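
The plane is added in exactly the same way; a possible version of createInfinitePlaneAtOrigin() (a sketch, not the exact original code) would use btStaticPlaneShape with mass 0 so that the plane stays static:

void bullet::createInfinitePlaneAtOrigin()
{
    btTransform t;
    t.setIdentity();                                    //plane sits at the origin, no rotation
    plane=new btStaticPlaneShape(btVector3(0,1,0),0);   //normal points up along Y, plane constant 0
    btMotionState* motion=new btDefaultMotionState(t);
    btRigidBody::btRigidBodyConstructionInfo info(0.0,motion,plane);    //mass 0 => static, inertia defaults to zero
    btRigidBody* body=new btRigidBody(info);
    world->addRigidBody(body);
    bodies.push_back(body);
}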

Once you add a shape, you can finally move on to rendering your object. Rendering a shape is not very different from before, but with Bullet you need to ask the physics engine for the position of the shape, because it is the Bullet library which calculates the new position of your object based on the physics interactions. Speaking in terms of matrices, the Bullet library gives you the model matrix (translation and rotation) for your object.

    float mat[16]={0};
    btTransform t;
    body->getMotionState()->getWorldTransform(t);
    t.getOpenGLMatrix(mat);

You get the btTransform object of the shape (body) that you want to query using getWorldTransform(). btTransform has a method to get the object's matrix in OpenGL format: you pass it an empty array, the function fills it up, and then you can use it in your program. If you're working with the fixed-functionality pipeline you can simply use the returned matrix like this:

    glPushMatrix();
        glMultMatrixf(mat);    //translation,rotation
        // render the shape
    glPopMatrix();

But if you're working with the programmable pipeline like myself, you may need to take the transpose of the matrix (depending on how your matrix class stores its data), then multiply the result with the view and projection matrices and send it to the shaders (MVPs are usually sent to shaders using uniforms). After this you can render your object just like before, and there will be proper collision detection between the different objects. You can call the addSphere method on a keypress; this gives the effect of firing a bullet. In my scene I add the new shape at the camera position and also give it some initial velocity.
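
For the programmable pipeline, a minimal sketch of that step might look like the following. This is an assumption-heavy example: it uses GLM, a shader program handle prog, and a uniform named uMVP, none of which come from the original code. Note that getOpenGLMatrix() fills the array column-major, which is what GLM expects, so the transpose is only needed if your own matrix class is row-major.

//assumes an OpenGL loader (GLEW/GLAD) and GLM are already included and set up
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadMVP(GLuint prog, btRigidBody* body, const glm::mat4& view, const glm::mat4& proj)
{
    float mat[16]={0};
    btTransform t;
    body->getMotionState()->getWorldTransform(t);
    t.getOpenGLMatrix(mat);                         //column-major model matrix from Bullet

    glm::mat4 model=glm::make_mat4(mat);            //transpose here instead if your matrices are row-major
    glm::mat4 mvp=proj*view*model;

    glUniformMatrix4fv(glGetUniformLocation(prog,"uMVP"),1,GL_FALSE,glm::value_ptr(mvp));
}

A newly added sphere can also be given an initial push with body->setLinearVelocity(btVector3(...)), which is what produces the firing effect mentioned above.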

PS: This program is based on a Bullet physics tutorial by The CplusPlus Guy. All the documentation for the Bullet classes and functions is available in the documentation section of the Bullet webpage.

I would also like to mention a couple of mistakes I made, which 'rendered' my code useless-
1- I was adding spheres inside the display loop, which caused the program to slow down.
2- I was not resetting the model matrix when calculating the position of a new sphere.

Setting up Bullet on Visual Studio 2010 and beyond

Introduction-

At some stage during your game development endeavours, you will come across a situation where you will want to implement a physics simulation module in your engine.

For those people who make their games the "easy" way, i.e. using Unity3D, Blender, etc., the physics library comes bundled with their software.
But for real game programmers, who use low-level APIs such as OpenGL or DirectX, you will either have to write your own physics library or use a library created by other people.

To create and implement your own library you need to be good at physics and know the intricacies of the subject, which would take several years of work. If you want a quick solution, you can use a library already made by others.

When you want to start working on physics in 3D, you’re presented with two main options,

  • PhysX library (NVIDIA)
  • Bullet Physics library (Erwin Coumans, et al.)

PhysX is NVIDIA's proprietary library and its hardware acceleration only works on NVIDIA graphics cards. If you have an AMD GPU like me, you will probably want to go with Bullet.
Bullet is an open source library created and maintained by Erwin Coumans.
The Bullet library is used in many games, the most famous ones being Crysis and GTA. It's also been used in several movies. Yes!! I said movies: 2012 by Digital Domain, Hancock by Sony Pictures, and Bolt by Walt Disney are the ones that come to mind.
Along with this, it's been used in several 3D modelling packages; Blender, for example, uses Bullet for its physics simulation.

Now, let’s have a look at how you can include this library in your projects.

Prerequisites

  • You have a working 3D OpenGL/DirectX application.
  • You know how to use Visual Studio and are comfortable with setting up libraries.

Bullet doesn't come like some of the other libraries that you may have used. For example, if you download GLUT { you're awesome if you don't use GLUT and use the WIN32 API to create windows for your 3D programs :) }, you see 3 standard folders: include, lib and bin.
They are quite self-explanatory: the include folder path goes into your project's include path for the header files; the lib folder contains the library files created by the build, which you tell the compiler about using #pragma (or the linker settings); and the bin path contains the DLL needed for GLUT.
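
For example, with the Microsoft compiler the .lib can be requested straight from code; the exact library name depends on your GLUT build, so treat it as a placeholder:

#pragma comment(lib, "glut32.lib")    //MSVC-only: asks the linker to pull in this library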

When you extract Bullet from its ZIP file you will not find any of these. It's a bit confusing at the start, but once you get the hang of it, it's quite easy.

Now we can go through the process step by step, and I will try to explain the issues that I came across while doing so. Hopefully this will solve any issues that you might be getting.

  • Download the latest Bullet release from its website.
  • Extract the library from the archive.
  • Download CMake: you will need this tool to generate the Bullet project files for your compiler.
  • Follow the build steps given here. Although they ask you to open VS at the end, do not open VS yet.
  • Once you are done configuring and generating the projects, open VS.
  • In your OpenGL/DX project, right-click the solution -> Add -> Existing Project.
  • Navigate to where you built the Bullet lib using CMake. For me it was-
    C:\GLLibs\bullet-2.81-rev2613\build2
  • You will find the src folder here; go inside it and add the 3 main projects needed for Bullet to work:
    1. BulletCollision
    2. BulletDynamics
    3. LinearMath
  • Now right-click your main project, click Project Dependencies, and select the 3 projects.
  • Now go to the properties of each of the 3 projects (BulletCollision, BulletDynamics, LinearMath) and make sure Librarian->Link Library Dependencies is set to Yes.

If at this step you see a dialog box saying “the project file ‘’ has been renamed or removed”, you will have to delete the build and start over again. It means that you have messed up one of the .sln files for a project. It's hard to find out which project it's been messed up for, so it's easier to start from the beginning.

  • Now go to your main project properties and, in the Linker->Input section, add the .lib files generated in the lib folder to Additional Dependencies.
    For me they were in- “C:\GLLibs\bullet-2.81-rev2613\build2\lib\Debug”
  • Add the Bullet ‘src’ folder to your include path.

Add the code for Hello World Bullet from here, and finally hit the Build Solution button.
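
In case that link is unavailable, even a tiny sanity-check program like the one below (a sketch, not the official sample) is enough to prove that the three Bullet projects link correctly:

#include <btBulletDynamicsCommon.h>
#include <iostream>

int main()
{
    //standard Bullet bootstrap: configuration, dispatcher, broadphase, solver, world
    btDefaultCollisionConfiguration* config=new btDefaultCollisionConfiguration();
    btCollisionDispatcher* dispatcher=new btCollisionDispatcher(config);
    btDbvtBroadphase* broadphase=new btDbvtBroadphase();
    btSequentialImpulseConstraintSolver* solver=new btSequentialImpulseConstraintSolver();
    btDiscreteDynamicsWorld* world=new btDiscreteDynamicsWorld(dispatcher,broadphase,solver,config);
    world->setGravity(btVector3(0,-10,0));

    std::cout<<"Bullet world created, gravity y = "<<world->getGravity().getY()<<std::endl;

    delete world;
    delete solver;
    delete broadphase;
    delete dispatcher;
    delete config;
    return 0;
}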

If you're in luck, you will see that beautiful message:

========= Build: 1 succeeded, 0 failed, 3 up-to-date, 0 skipped ==========

One thing to keep in mind while setting up Bullet: it's one of the tougher libraries to link with your project successfully. It took me 2 days to set it up properly. One of the comments which I came across while doing this was very interesting and I would like to mention it here,

“After about 4 and half hours I was able to get the bullet 2.82 compiled and running the demos. I still can’t get this program to compile after another 2 and half hours. Does it take this long for everyone else. I don’t know how people survive this. I have also spent about 5 hours trying to get OpenGL 3 + to compile on my computer… I am thinking the hardware won’t do it. This has been over the course of about 2 weeks. Sometimes I feel like I am just gong out to slam my balls in the car door again.”
– Some frustrated guy.

Shading Language Library: A C++ wrapper over GLSL

For those who are learning GLSL after coming from the fixed-functionality pipeline, the first few days must have been a nightmare; there are so many things that can go wrong while writing a GLSL program.
I went through the same phase about 4 months back when I started learning GLSL, after working on about half a dozen small indie projects in the now deprecated fixed-functionality pipeline.

The first weird thing that you come across when you start doing GLSL is that there is a separate compiler for GLSL, and you are the one who is supposed to compile and link GLSL programs. It's quite overwhelming when you read about it for the first time, especially for people who are used to IDEs, but it's actually quite simple: these are just function calls which do the work for you.

Here are the steps required to make your GLSL program run (a code sketch follows the list):

You write your shaders and put them in a text file or a string (they eventually get passed to the GLSL compiler as strings anyway). I would recommend keeping your code in a file, because it's easier to debug and it's always a good idea to modularise your code.

  1. load your code from file to strings
  2. create shader
  3. convert your strings into shader source
  4. compile shader
  5. create program
  6. attach your shader to a program
  7. link
  8. and use
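
Here is what those eight steps look like as code, a minimal sketch with the error checking trimmed. It assumes a GL 2.0+ context and loader are already in place, and that the shader sources have already been read from file into strings (step 1):

//assumes a GL loader header (GLEW/GLAD) is already included
#include <string>

GLuint buildProgram(const std::string& vsSource, const std::string& fsSource)
{
    //steps 2-4: create a shader object, hand it the source string, compile it
    auto compile=[](GLenum type, const std::string& src) -> GLuint {
        GLuint shader=glCreateShader(type);
        const char* ptr=src.c_str();
        glShaderSource(shader,1,&ptr,nullptr);
        glCompileShader(shader);                //check GL_COMPILE_STATUS here in real code
        return shader;
    };
    GLuint vs=compile(GL_VERTEX_SHADER,vsSource);
    GLuint fs=compile(GL_FRAGMENT_SHADER,fsSource);

    //steps 5-7: create the program, attach the shaders, link
    GLuint program=glCreateProgram();
    glAttachShader(program,vs);
    glAttachShader(program,fs);
    glLinkProgram(program);                     //check GL_LINK_STATUS here in real code

    glDeleteShader(vs);                         //safe once attached; they are freed when the program goes away
    glDeleteShader(fs);
    return program;                             //step 8: glUseProgram(program) before drawing
}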

I used these function calls and found that doing all of this every time you write a GLSL program takes a lot of your time.
Let's face it, you're here to do 3D programming, so I would rather concentrate on that instead of spending my time on boilerplate.

So I decided to put the common parts into a set of classes, to make shaders easier to use when there are multiple shader programs.
It sure did take me some time to do this, but now it's much easier to add a GLSL program to my existing application.

Including dylibs in Xcode..

Troubles one might face while using dynamic libraries-

Coming from a Windows background, I thought using dynamic libraries in Xcode would be just as easy as copying the dylib files to the project source dir.. but no.. Xcode has some different ideas.

So, you added all the include folders, and you added the library paths in the project build settings.
You compile the program… NO ERRORS… you build it.. you see that the BUILD SUCCEEDED,

However, when you run it, you see something like this..

I hope you're familiar with what the things in the image above mean. Whenever you build your project in Xcode, it creates an app executable in Xcode's temporary folder. Even though this app file has the .app extension, it is just a directory: you can right-click on it, choose Show Package Contents, and browse the files inside your app. If you copy it onto Windows, you will simply find a folder named.. explicit_gfx.app. The main executable file is inside this directory; the .app file simply encapsulates all the frameworks and other resources.

Anyhow, coming back to the issue at hand: my program is looking for ./libfmodex.dylib but is not finding it in the executable app file. Yet when I go inside the app file, I see it sitting there nicely in the _.app/Contents/Frameworks/ directory. What went wrong?
You might say the file is there but the path is incorrect.

No, actually the thing is that the name, the install id of the dylib, needs to be changed in order to use it from your executable.
This is a real pain in the ___ if you're new to Xcode.
It was because of this error that I once had to fall back on a static library for some framework.

But i eventually got to know the source of the problem.

If you code regularly in the Xcode environment, or on OS X for that matter, you might have heard of a utility called install_name_tool.
It is used to change the install names recorded in a dylib file.
So, in order to fix the issue at hand, you will have to run this command on your dylib.

How to do that? —

  • Open up terminal
  • install_name_tool -id @rpath/libfmodex.dylib /[path_of_dylib]/libfmodex.dylib
  • install_name_tool -change libfmodex.dylib @rpath/libfmodex.dylib /[path_of_dylib]/libfmodex.dylib

This did the trick for me: [-id] changes the dylib's install name to the one you supply (@rpath/… in this case), and [-change] changes the path of one referenced library.
You can check the man page for install_name_tool for further information.

Apple sandboxing: friend or enemy?

So, I was almost done testing the application that we wanted to release on the Mac App Store. I had tested it on almost 10 different Mac machines, and it was working like a charm.

So, late on Monday night we went to iTunes Connect and looked for the things we were going to need to upload the application to the App Store.
I have to say, Apple is damn strict about who uploads apps to the App Store, and it took us nearly 2 days to go through all the stuff, as it was our first time. It's not just about paying Apple $99; there is more to it than just that.
But we did upload it eventually.

Certificates…. lots and lots of certificates, not just for the app but also for the developer who is working on the app.

And after that, what you need is a thing called a developer profile, which took most of our time. A developer profile is a single file containing all the information needed to upload the app.
It contains the distribution profile, developer certificates, etc.

And.. with a lot of effort it was done.
We had passed all the hurdles to submit the app to Apple, but little did we know that a storm was brewing at Apple; they were gearing up with their weapons to go against us developers..
After submitting the application they came back to us almost immediately saying

1- the app was not sandboxed.
2- one of the libraries was compiled for ppc7400.

Sandboxing is easy to enable, but after enabling it a few features in our app stopped working, without any errors or warnings, which is very bad: we had already tested the application and it was working fine, and we were not aware that we would have to test it all over again.

Whatever options you see in Xcode 5 for handling entitlements, they are not enough; if your application plays sound and does some other stuff, you'll have to add the extra entitlements to the sandbox yourself in order to enable it.

To enable the sound in our app, we had to add 3 different entitlements to the entitlements file.

This one allows your application to access the microphone.

com.apple.security.device.microphone

This allows talking to the MIDI server, which coordinates all the MIDI functionality across applications.

com.apple.security.temporary-exception.mach-lookup.global-name

This allows talking to audio components which are not themselves sandboxed.

com.apple.security.temporary-exception.audio-unit-host

Was it worth going through all this trouble? I don't think so. Sandboxing is a good option for mobile devices, but on iMacs and MacBooks apps need to do many more things in order to run properly. C'mon, even to access a file you need to add an entitlement to the sandbox.
It's a good feature, but Apple should release some set of entitlements which are commonly used by applications [such as games needing sound + file access + game controller {if any}, etc.].

If you're getting issues like “one of the libraries is compiled for ppc7400” and you're not developing for ppc7400, you can remove that architecture from the dynamic library that is causing the issue.
You can remove the architecture from any dylib using the lipo command with the -remove flag.. it is used as follows..

 sudo lipo libfmodex.dylib -remove ppc7400 -output libfmodexx.dylib

 lipo [input_file] -remove [architecture to remove] -output [output_file]

Happy Coding folks!

How to send basic data to GPU in OpenGL?

For those who came from a fixed-functionality pipeline background and started learning OpenGL 2.0+ (aka GLSL), the first few days must have been very confusing, especially with all the VAOs and VBOs, etc.
Well, I learned it all by making mistakes and rising from them, but let's make things simpler for those who are new to this GLSL world.

First things first: whatever you render onto the screen is rendered with the help of your GPU (aka the OpenGL server), be it the desktop GUI environment you see or any fancy game you're playing.

So, how exactly is data passed to the GPU, and how exactly is it stored on the GPU itself? Every GPU these days has a certain amount of memory for buffers. These buffers are nothing but chunks of memory on the GPU which are used to store all the data passed to the GPU via OpenGL.

Now, given that you can store data on the GPU using buffers, you cannot simply use them as pointers; you will have to use Buffer Objects for that. Here is the official definition of buffer objects according to the OpenGL wiki: [Buffer Objects are OpenGL Objects that store an array of unformatted memory allocated by the OpenGL context (aka: the GPU). These can be used to store vertex data, pixel data retrieved from images or the framebuffer, and a variety of other things.]

OK, so that's about buffer objects, but there is one more term.. called Vertex Buffer Objects.. The buffer objects that deal with vertex data for shaders are called Vertex Buffer Objects. OpenGL provides very simple (simple? not really, you might need to read the specification again and again) functions to create and manipulate these buffers. But for now let us understand the theory behind them.

Whenever you use any buffer object you will have to bind it, so OpenGL knows which buffer it has to work on for the next few calls. The currently bound buffer stays bound until you bind another buffer or pass 0 to glBindBuffer(). We'll talk more about binding a buffer and the target it is bound to later on.

Some buffers, specifically Vertex Buffers, will need Vertex Array Objects for the vertex specification. That is nothing but specifying how the data in the buffer is laid out. According to the OpenGL wiki,

[A Vertex Array Object (VAO) is an OpenGL Object that encapsulates all of the state needed to specify vertex data (with one minor exception noted below). They define the format of the vertex data as well as the sources for the vertex arrays. Note that VAOs do not contain the arrays themselves; the arrays are stored in Buffer Objects. The VAOs simply reference already existing buffer objects.]

Think of a VAO as an authority which governs the VBOs (the VBOs are just slaves). VAOs help specify which vertex shader attribute the data in each buffer feeds. Buffer objects can later also be used for transform feedback and asynchronous pixel transfers, concepts that I still have to assimilate. But anyhow, that's all about VBOs and VAOs; a small sketch follows, and next up I will write about how to use them in a full program with sample code.
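
Until that post, here is a minimal sketch of the idea, assuming an OpenGL 3+ context with a loader already set up; the triangle data and attribute index 0 are just placeholders:

GLfloat positions[]={
    -0.5f,-0.5f,0.0f,
     0.5f,-0.5f,0.0f,
     0.0f, 0.5f,0.0f
};

GLuint vao,vbo;
glGenVertexArrays(1,&vao);
glBindVertexArray(vao);                         //the VAO records the vertex format set up below

glGenBuffers(1,&vbo);
glBindBuffer(GL_ARRAY_BUFFER,vbo);              //bind: the next buffer calls target this VBO
glBufferData(GL_ARRAY_BUFFER,sizeof(positions),positions,GL_STATIC_DRAW);   //upload to GPU memory

glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,3*sizeof(GLfloat),(void*)0);    //attribute 0 = vec3 position
glEnableVertexAttribArray(0);

glBindVertexArray(0);                           //unbind; bind vao again before glDrawArrays()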

Link FreeType to your project statically in Xcode.

Linking FreeType to your project in Xcode for an OpenGL program.

So, fancy using text in your OpenGL code, eh?

Anyone who has tried to render text in OpenGL will tell you it's not a piece of cake; you will have to clear some hurdles. They might also tell you the best way to go about it is to use the default text and font rendering, but you don't want that non-anti-aliased text. What you're looking for is text which looks good, is anti-aliased, and can load any glyph from a TTF file. I don't know if this is the case for you, but I certainly wanted anti-aliased text, if not a TTF glyph.
{you know who THEY are :P}

So, after searching a bit on the internet and banging my head a bit on Stack Overflow, I found out that there is this library called FreeType, which can render text and is able to load any .ttf file you send to it. I downloaded it and went through the install notes.

Somehow, I was able to install it in my Xcode project using the .dylib files that were created during make install: I copied these files into my project source and gave the appropriate paths. But what do you know, when I created an archive it was still looking for the dylib in the path where it was installed. I found this out when I deployed my project archive on the client's machine.

The archive failed with dylib NOT LOADED, reason: image not found. Being new to Mac development I didn't know what was happening, so after looking it up on Stack Overflow again I found out that a path relative to the archive can be set using the install_name_tool command on Mac. So I used it, replaced those paths, and deleted the FreeType installation from my machine, just so I could test it.
It worked fine on my machine [as it always does], but when I deployed the same archive to the client side, it failed again, though there were no dylib issues this time. The problems were in the functions called from this dylib: EXC_BAD_INSTRUCTION. [btw, I am still looking for an answer to this :) ]
I couldn't really find out why it was happening, so I decided to link the FreeType library statically [with the .a file].
Some people say you'll have to create a workspace and such in Xcode in order to do this… I didn't find any point in doing that, so I simply included the static libraries in my Xcode project and gave the right paths.

Well, let’s get started with the steps on how to do that right away..

  1. Compile FreeType and make sure it generated a .a file. [You should be able to find this in the lib folder inside your FT home directory.]
  2. Copy the .a file into your project dir if you want to run your project on another Mac.
  3. Now you need to add this .a file to your project in Xcode; for that, go to Build Phases in your target and add the .a file there.
    PS: the .a file is called libfreetype.a
  4. Next up you need to set the flags for FreeType; you can do this in Other Linker Flags. The flags you need for FT are -lfreetype -lz -lbz2.
  5. Now you can build your project…. and BAMMM!!! What happened there? FT never works alone; like many Hollywood police movies, it always works with a partner, this time the partner being libpng.
  6. If you have libpng, well and good: simply copy the .a file of libpng alongside the libfreetype.a file and you're good to go. Else download and install libpng, or if you prefer the more geeky way, make / make install will do that.
  7. And now build and run your project.

You will find that all your troubles have gone away :)
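
If you want a quick way to prove the static link actually works before wiring FreeType into your OpenGL text renderer, a tiny sanity check like this will do (a sketch; the font path is just a placeholder):

#include <ft2build.h>
#include FT_FREETYPE_H
#include <cstdio>

int main()
{
    FT_Library ft;
    if(FT_Init_FreeType(&ft)){ std::printf("could not init FreeType\n"); return 1; }

    FT_Face face;
    if(FT_New_Face(ft,"/Library/Fonts/Arial.ttf",0,&face)){     //any .ttf you have lying around
        std::printf("could not load font\n"); return 1;
    }

    FT_Set_Pixel_Sizes(face,0,48);              //request 48px glyphs
    FT_Load_Char(face,'A',FT_LOAD_RENDER);      //rasterise one glyph into face->glyph->bitmap

    std::printf("glyph bitmap: %dx%d\n",(int)face->glyph->bitmap.width,(int)face->glyph->bitmap.rows);

    FT_Done_Face(face);
    FT_Done_FreeType(ft);
    return 0;
}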