Map and reduce kernels and random functions.

Cuda runtime compilation In order to make any kind of machine learning algorithm work, the map operations are essential. ReLU, sigmoid, tanh…for such functions, and for more complex kinds like k-selection, it is necessary to apply map operations to the result of Ax+b. Until just last month, when I started working with ManagedCuda, most of my…
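To give a concrete picture of what such a map kernel might look like, here is a minimal F# sketch that compiles an elementwise ReLU kernel at runtime with ManagedCuda's NVRTC wrapper and runs it over a device array. The kernel name, sizes and launch configuration are my own illustration rather than the code from the post:

open ManagedCuda
open ManagedCuda.BasicTypes
open ManagedCuda.NVRTC
open ManagedCuda.VectorTypes

// Elementwise ReLU written as a Cuda C kernel and compiled at runtime with NVRTC.
let kernel_source = """
extern "C" __global__ void reluKernel(const float* x, float* y, const int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = x[i] > 0.0f ? x[i] : 0.0f;
}
"""

let ctx = new CudaContext()
let compiler = new CudaRuntimeCompiler(kernel_source, "reluKernel")
compiler.Compile([||])
let kernel = ctx.LoadKernelPTX(compiler.GetPTX(), "reluKernel")

// Apply the map to some data, e.g. the output of Ax+b.
let n = 1024
let input = Array.init n (fun i -> float32 (i - 512))
let d_in = new CudaDeviceVariable<float32>(SizeT n)
let d_out = new CudaDeviceVariable<float32>(SizeT n)
d_in.CopyToDevice(input)

kernel.BlockDimensions <- dim3(256u, 1u, 1u)
kernel.GridDimensions <- dim3(uint32 ((n + 255) / 256), 1u, 1u)
kernel.Run(d_in.DevicePointer, d_out.DevicePointer, n) |> ignore

let output = Array.zeroCreate<float32> n
d_out.CopyToHost(output)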

The dMatrix type and the multiply and add wrappers.

The dMatrix type Before anything can be done, the basic 2D matrix type has to be defined. It would be far too unsafe to call Cuda library functions on raw Cuda matrices. Instead it is very helpful to wrap them inside a class, or in this case a record, which is similar to a class. The difference…
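As a rough sketch of what such a wrapper might look like (the field and member names here are hypothetical, not necessarily the ones used in the post), consider a record that pairs the matrix dimensions with the device buffer:

open ManagedCuda
open ManagedCuda.BasicTypes

// A device-resident 2D matrix wrapped in a record (hypothetical field names).
type dMatrix =
    { num_rows : int
      num_cols : int
      dArray : CudaDeviceVariable<float32> }

    // Allocates device memory of the right size and uploads the host data.
    static member create (rows : int, cols : int, data : float32[]) =
        let buffer = new CudaDeviceVariable<float32>(SizeT(rows * cols))
        buffer.CopyToDevice(data)
        { num_rows = rows; num_cols = cols; dArray = buffer }

Keeping the dimensions next to the device pointer like this lets the multiply and add wrappers check shapes before calling into the Cuda libraries.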

The first steps to running the library

64-bit Mode Assuming you have installed F#, the first step you should take is to run F# Interactive in 64-bit mode. In VS go into Tools -> Options and just write F# in the search bar. Then go to F# Tools -> F# Interactive. Enable both debugging and 64-bit mode. Debugging is for…

Installing Dependencies

The following tutorials are going to be written in F# and Cuda. Before we can start, here is what will need to be installed: F#. I cannot advise on how to get this on Linux, but on Windows the easiest option would be to download Microsoft Visual Studio 2015. I would recommend instead of…

Introduction

Over the last five months I have done nothing but program neural nets. I did not have anything specific in mind when I started; the purpose was merely to get good at writing efficient numerical code. It had its ups and downs and now I think I am rather decent at…