In Hollywood movies, artificial intelligence (AI) is often portrayed as bad: we often witness a bunch of AI-powered robots turning against their human creators, as in movies such as *The Matrix*, *Terminator*, *I, Robot* and more. There are a few exceptions where AI is portrayed as good, such as in *Interstellar*, *Star Wars* and *Star Trek*, just to name a few. This negative portrayal of AI has led to misconceptions about the main concerns that experts have about it. Long story short: the main concern about AI isn’t killer robots with Austrian…

This blog post is essentially about the gist of the research paper **“Why does deep and cheap learning work so well?”** written by Henry W. Lin, Max Tegmark (the author of the superbly written books *Our Mathematical Universe* and *Life 3.0*) and David Rolnick. The paper explores why deep learning (DL) works so well, and the authors argue that the answer may not only lie in computer science and mathematics, but also in physics. Neural networks (neural nets or NNs) are the foundation of deep learning and they usually outperform virtually every other machine learning algorithm out there. One major disadvantage…

In this blog post, I elaborate on how someone who’s learning data science may increase their effectiveness based on the principles that I’ve learnt from a book called **The 7 Habits of Highly Effective People**. I specifically focus on how someone might increase their contributions on websites like **GitHub** and **Kaggle** by adopting the three habits discussed here. Contributing on such websites may effectively build the contributor’s reputation amongst the data science and/or software development community. …

This is the final part of my blog post series on convolutional neural networks. Here are the pre-requisite parts for this post:

- **Part 1: Edge Detection**
- **Part 2: Padding and Strided Convolutions**
- **Part 3: Convolutions Over Volume and The Convolutional Layer**
- **Part 4: The Pooling and Fully Connected Layer**

So why are convolutions so useful and when can you include them in your neural networks?

There are two main advantages of using convolutional network layers:

- **Parameter sharing**
- **Sparsity of connections**
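To make the first advantage concrete, here is a minimal sketch comparing parameter counts. The numbers (a 32×32×3 input, six 5×5 filters, a 28×28×6 “valid” output) are my own toy example, not taken from the post:

```python
def fc_params(n_in, n_out):
    """Weights plus biases for a fully connected layer mapping n_in -> n_out."""
    return n_in * n_out + n_out

def conv_params(f, n_c_prev, n_filters):
    """Each f x f x n_c_prev filter has one bias; the weights are shared
    across every position of the input, hence the tiny count."""
    return (f * f * n_c_prev + 1) * n_filters

n_in = 32 * 32 * 3       # flattened 32x32x3 input
n_out = 28 * 28 * 6      # flattened 28x28x6 "valid" convolution output
print(fc_params(n_in, n_out))   # 14455392 parameters
print(conv_params(5, 3, 6))     # 456 parameters
```

The fully connected layer needs over 14 million parameters to produce the same output shape that the convolutional layer produces with 456, which is what parameter sharing buys you.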

This is the fourth part of my blog post series on convolutional neural networks. Here are the pre-requisite parts for this post:

- **Part 1: Edge Detection**
- **Part 2: Padding and Strided Convolutions**
- **Part 3: Convolutions Over Volume and The Convolutional Layer**

The final part of the series explains why it might be a great idea to use convolutions in a neural network:

Other than convolutional layers, ConvNets often also use pooling layers to reduce the size of the representation, to speed up computation, and to make some of the features that it detects a bit…
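As an illustrative sketch of what a pooling layer does, here is 2×2 max pooling with stride 2 (common default hyperparameters; the function name and input matrix are my own, not from the post):

```python
import numpy as np

def max_pool(x, f=2, stride=2):
    """Slide an f x f window over x with the given stride,
    keeping only the maximum value in each window."""
    h, w = x.shape
    out_h, out_w = (h - f) // stride + 1, (w - f) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride:i * stride + f, j * stride:j * stride + f]
            out[i, j] = window.max()
    return out

x = np.array([[1, 3, 2, 1],
              [4, 6, 6, 8],
              [3, 1, 1, 0],
              [1, 2, 2, 4]], dtype=float)
print(max_pool(x))
# Each output entry is the max of one 2x2 window:
# [[6. 8.]
#  [3. 4.]]
```

Note that the 4×4 representation shrinks to 2×2, and pooling introduces no parameters to learn.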

This is the third part of my blog post series on convolutional neural networks. Here are the pre-requisite parts for this blog post:

Here are the subsequent parts of this series:

A pre-requisite here is knowing matrix convolution, which I have briefly explained in the first section of **part one** of this series. Knowing something about the **rectified linear unit (ReLU)** activation function could also help you understand some of the points I make in section 2 here.
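For reference, the ReLU mentioned above can be sketched in one line (a minimal illustration, not code from the post):

```python
import numpy as np

def relu(z):
    """Rectified linear unit: passes positive values through, zeroes out the rest."""
    return np.maximum(0, z)

print(relu(np.array([-2.0, 0.0, 3.5])))  # negative input becomes 0, positives unchanged
```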

In…

This is the second part of my blog post series on convolutional neural networks. Here are the subsequent parts of this series:

- **Part 3: Convolutions Over Volume and The Convolutional Layer**
- **Part 4: The Pooling and Fully Connected Layer**
- **Part 5: Why Convolutions?**

A pre-requisite here is knowing matrix convolution. I have briefly explained matrix convolution in the first section of **part one** of this series. Don’t be intimidated by the word “convolution” or the number of matrices that you’ll see. I’ve used the asterisk to denote the convolution operator, not matrix multiplication. …
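A minimal sketch of the convolution operator denoted by the asterisk (as used in deep learning, i.e. without flipping the filter). The 3×3 vertical-edge filter and the half-bright/half-dark 6×6 image below are standard illustrative choices, not necessarily the exact matrices from the post:

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' convolution: slide the kernel over the image and take
    the element-wise product sum at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]], dtype=float)
# Bright left half, dark right half -> a vertical edge down the middle.
image = np.array([[10, 10, 10, 0, 0, 0]] * 6, dtype=float)
print(convolve2d(image, vertical_edge))
# Every row is [0. 30. 30. 0.]: the filter responds only where the edge is.
```

A 6×6 image convolved with a 3×3 filter yields a 4×4 output, which is why padding (the subject of this part) matters if you want to preserve the spatial size.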

This is the first part of my blog post series on convolutional neural networks. Here are the subsequent parts of this series:

- **Part 2: Padding and Strided Convolutions**
- **Part 3: Convolutions Over Volume and The Convolutional Layer**
- **Part 4: The Pooling and Fully Connected Layer**
- **Part 5: Why Convolutions?**

While doing an online convolutional neural network (CNN) course from the deep learning specialization on Coursera by Andrew Ng, I noticed that there are no slides or lecture notes, and there is no prescribed textbook (besides, a deep learning textbook would be convoluted, no pun intended, for some…

Click here for the Jupyter notebook. NB: I adapted it from a notebook created by Alex Aklson and Polong Lin, from the IBM Applied Data Science Capstone course on Coursera.

The detailed report can be found in my GitHub repo here.

Johannesburg, informally known as Jozi, Joburg, or “The City of Gold”, is the largest city in South Africa and one of the 50 largest urban areas in the world¹. It is the provincial capital and largest city of Gauteng, which is the wealthiest province in South Africa. Johannesburg is the seat of the Constitutional Court, the highest court in…

I'm a mathematics and theoretical physics graduate. Interested in artificial intelligence/machine learning and data science.