Coding Neural Networks in Carbon: Part 1

New language, new possibilities

Mwanikii
4 min read · Aug 5, 2022
Photo by Markus Spiske on Unsplash

Google has recently been touting its new language, Carbon, as a successor to C++. It has made great strides to ensure that the language can be integrated easily with C++, on top of other features.

Sadly, according to my interactions with the maintainers of its GitHub repository, the language is not yet ready to build anything for production. That is not enough to deter me, though. The more I explore, the more I can contribute to the betterment of the language. So can you.

So I decided to attempt to write a neural net by hand. I know there are libraries for this, and others might write this off as a lackluster and fruitless activity. I would like to emphasize that the essence of this exercise is to grow your knowledge of neural nets and develop intuition about the language in question and its features.

Now, let’s code. Ensure that you have carbon-lang up and running on your system. Here’s my tutorial on how to do that:

The first step is to create a new directory and file, inside the Carbon directory, where we will do all the work. (The Carbon directory will be in your home directory by default on Linux.)

$ cd carbon-lang/explorer/testdata

After opening this directory, we make a new folder where we can build our neural net:

$ mkdir neural_nets

We then make the file where we will write our code:

$ cd neural_nets
$ touch basic_neural_net.carbon

This creates the file we want, and we can now open it with any text editor.

Intuition

y = mx + c

The equation above is arguably one of the most ubiquitous equations around. But what does it have to do with our work here?

This might be a massive oversimplification, but neural nets take in data (represented by x), multiply it by a given weight (m), and then add a bias (c). The result is the output (y).
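A real neuron usually has several inputs, so the same equation extends to a weighted sum, with one weight per input and a single bias:

y = x1*w1 + x2*w2 + … + xn*wn + c

This weighted sum, with three inputs, is exactly what we will code below.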

A neuron by Cornell University

Neural networks perform exceptionally well because there are update rules that automatically tweak the weights and biases, learning from the input (x) to produce the best possible output.

This is a great oversimplification but my intention is to build up slowly.

package ExplorerTest api;

That is the first line we should write so that the code can be run through the explorer provided by Carbon.

We will then build two arrays: one will contain the inputs, while the other will contain the weights. They will hold an equal number of elements, since each input has a corresponding weight, as seen in the image above. There will only be one bias, though, because all the weighted inputs converge at the same point.

package ExplorerTest api;

// Declare a function called Main that returns an i32 value.
fn Main() -> i32 {
  var inputs: [i32; 3] = (1, 2, 3);
  var weights: [i32; 3] = (3, 2, 8);
  var bias: i32 = 3;
  return 0;
}

We have declared two arrays of size three that hold i32 elements, because that is the only data type currently supported for the operation we are trying to perform. Note that since Main is declared to return an i32, it must end with a return statement.

We shall then multiply each input by its corresponding weight, cumulatively add the products together, and finally add the bias.

// The complete code should look like this.
package ExplorerTest api;

fn Main() -> i32 {
  var inputs: [i32; 3] = (1, 2, 3);
  var weights: [i32; 3] = (3, 2, 8);
  var bias: i32 = 3;

  // Multiply each input by its weight, sum the products, and add the bias.
  var output: i32 = inputs[0] * weights[0] +
                    inputs[1] * weights[1] +
                    inputs[2] * weights[2] + bias;
  Print("The output is: {0}", output);
  return 0;
}
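As a side note, the same weighted sum can be computed with a loop instead of writing out each term. Carbon is still changing rapidly, so treat the following as a sketch; it assumes the explorer supports while loops and indexing an array with a variable, which matched its behavior at the time of writing.

// Sketch: the same weighted sum, accumulated in a while loop.
// Assumes while loops and variable array indexing work in the explorer.
package ExplorerTest api;

fn Main() -> i32 {
  var inputs: [i32; 3] = (1, 2, 3);
  var weights: [i32; 3] = (3, 2, 8);
  var bias: i32 = 3;

  // Start from the bias and accumulate each input * weight term.
  var output: i32 = bias;
  var i: i32 = 0;
  while (i < 3) {
    output = output + inputs[i] * weights[i];
    i = i + 1;
  }
  Print("The output is: {0}", output);
  return 0;
}

This version scales naturally if you later grow the arrays beyond three elements.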

The next step is to run it by going back to the carbon-lang directory and executing the following:

$ cd carbon-lang
$ bazel run //explorer -- ./explorer/testdata/neural_nets/basic_neural_net.carbon

With the values that we have input, we expect it to give us 34 as the answer. Input your own values and check whether the calculation is as intended.
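You can verify this by hand:

output = (1 * 3) + (2 * 2) + (3 * 8) + 3 = 3 + 4 + 24 + 3 = 34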

Here we have demonstrated a very simple idea, but it is a description of what goes on under the hood. Carbon does not yet have proper support for other data types, such as floats, for calculations like these, but it is my belief that they will be integrated later on.
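In the meantime, one possible workaround, which is my own improvisation rather than an official Carbon feature, is to emulate fractional weights with scaled integers, assuming the explorer supports integer division:

// Sketch: emulate the fractional weight 0.50 by scaling all values by 100.
// This is my own workaround, not an official Carbon feature, and it
// assumes integer division with / works in the explorer.
package ExplorerTest api;

fn Main() -> i32 {
  var input: i32 = 200;   // represents 2.00
  var weight: i32 = 50;   // represents 0.50
  var bias: i32 = 300;    // represents 3.00

  // (200 * 50) / 100 + 300 = 400, which represents 4.00.
  var output: i32 = (input * weight) / 100 + bias;
  Print("Scaled output: {0}", output);
  return 0;
}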

Stick around as we continue to build a neural net in this new programming language.
