I finished writing a Logistic Regression classifier in OpenCV. I've seen a lot of posts on the web asking for OpenCV's version of the same, but it's not available. It's easy to write your own logistic regression classifier.

I separated out the cost function and gradient descent algorithm (Batch Gradient Descent). You can replace it with your own optimization algorithm.
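The classifier in the repo is written against OpenCV's C++ API; as a rough NumPy sketch of the same structure (cost function and batch gradient descent kept as separate, swappable pieces — the function names here are mine, not the repo's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # Negative log-likelihood, averaged over the m training samples.
    m = X.shape[0]
    h = sigmoid(X @ theta)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m

def gradient(theta, X, y):
    # Gradient of the cost with respect to theta.
    m = X.shape[0]
    return X.T @ (sigmoid(X @ theta) - y) / m

def batch_gradient_descent(X, y, alpha=0.5, iters=10000):
    # Full-batch update each iteration; replace this function
    # with any other optimizer that consumes cost/gradient.
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta -= alpha * gradient(theta, X, y)
    return theta

# Tiny 1-D example with a bias column: labels split at x = 0.5.
X = np.array([[1.0, 0.1], [1.0, 0.2], [1.0, 0.8], [1.0, 0.9]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = batch_gradient_descent(X, y)
preds = (sigmoid(X @ theta) >= 0.5).astype(int)
print(preds)  # classifications for the four training points
```

Because the cost and gradient are separated out, swapping Batch Gradient Descent for another optimizer only means replacing `batch_gradient_descent`.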

I posted it on my Github page. You can check it out:

## Sunday, June 9, 2013

## Saturday, March 9, 2013

### Reading and Writing cv::Mat in OpenCV in C++

In the C++ API for OpenCV, you often come across writing and reading matrices as text files (analogous to saving .mat files in MATLAB/Octave). This is a simple way of doing it. You could even save multiple matrices in a single **.xml**/**.yaml** file.

**Note:** You could write matrices using C++ and read them using the Python API (which is very useful for prototyping, since development in Python is faster than in C++ and many Machine Learning APIs are available for use). However, I couldn't find a way to read more than one matrix written to a file when reading it with the Python API.

### Matrix types when using imshow and imwrite

I have never paid attention to trivial functions in OpenCV like **imshow** and **imwrite** while using matrix types.

It turns out that only **3-channel** or **single-channel** images can be saved using **imwrite** (specifically 8-bit images, and 16-bit images of **PNG**, **JPEG 2000**, and **TIFF** type). Click here for more details.

For example, the following image has values from 0 to 1 (floating point), but when you write it to disk, all you can see is "NOTHING". That is because imwrite only writes 8-bit 3-channel or 1-channel images. Also, when you do an imread on the same image, you get an 8-bit image. It's not the same CV_32F image that you wrote to disk.

Other formats of **cv::Mat** have to use the FileStorage class provided with OpenCV. Using it, one can save matrices of type CV_32F.
Labels:
c++,
computer vision,
OpenCV


## Friday, February 15, 2013

### Logistic Regression in Python

I wrote a Logistic Regression classifier in Python using Numpy.

You can check out the source code for the Logistic Regression classifier here.

It's a **multi-class classifier** written in Python (it uses a one-vs-rest classification strategy). It also works for 2-class data-sets.

It uses **Conjugate**/**Batch Gradient Descent** to learn the parameters of the Logistic Regression. You can edit it to make it work with the optimization module in **Scipy**.

You can clone the source using the following command:
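To illustrate the one-vs-rest idea and the suggested Scipy wiring, here is a minimal sketch of my own (not the repo's code): a regularized cost/gradient pair handed to `scipy.optimize.minimize` with the conjugate-gradient method, one binary classifier per class.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_grad(theta, X, y, lam=0.1):
    # L2-regularized logistic-regression cost and gradient in one call.
    m = X.shape[0]
    h = np.clip(sigmoid(X @ theta), 1e-12, 1 - 1e-12)  # guard the logs
    cost = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    cost += lam * (theta[1:] @ theta[1:]) / (2 * m)
    grad = X.T @ (h - y) / m
    grad[1:] += lam * theta[1:] / m
    return cost, grad

def one_vs_rest(X, Y, n_classes):
    # Train one binary classifier per class against all the rest.
    thetas = []
    for c in range(n_classes):
        y = (Y == c).astype(float)
        res = minimize(cost_grad, np.zeros(X.shape[1]),
                       args=(X, y), jac=True, method="CG")
        thetas.append(res.x)
    return np.array(thetas)

def predict(thetas, X):
    # The most confident one-vs-rest classifier wins.
    return np.argmax(sigmoid(X @ thetas.T), axis=1)

# Tiny 3-class example: a bias column plus two features.
X = np.array([[1.0, 0.0, 0.0], [1.0, 0.1, 0.1],
              [1.0, 1.0, 0.0], [1.0, 0.9, 0.1],
              [1.0, 0.0, 1.0], [1.0, 0.1, 0.9]])
Y = np.array([0, 0, 1, 1, 2, 2])
thetas = one_vs_rest(X, Y, 3)
print(predict(thetas, X))
```

Swapping `method="CG"` for another `minimize` method (or for a hand-rolled Batch Gradient Descent loop) leaves the rest untouched, which is the point of keeping the cost/gradient separate.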

Labels:
computer vision,
machine learning,
Python


## Sunday, January 27, 2013

### Machine Learning on Coursera

Late last year, I finished Andrew Ng's course on Machine Learning on www.Coursera.org. It was absolutely wonderful. I have learnt a lot from that course. I got a **perfect score of 100%**.

Finally, I overcame the procrastination and learnt to program in Matlab/Octave effectively. I felt very comfortable dealing with Octave. Previously, I thought Octave had far fewer features than Matlab, which is somewhat true. But you can write code that solves complex problems without spending much money on a Matlab license.

Andrew Ng has been absolutely phenomenal in explaining complex problems clearly. The new stuff that I learnt was:

- Logistic Regression.
- Neural Nets.
- Support Vector Machines.
- Debugging Machine Learning problems.
- Know-how (theory) of using certain optimization algorithms on Map-Reduce.

My background in Pattern Recognition definitely helped. Since I was working in a related area of research, it helped in finishing my Master's Thesis too! Now I'm pretty confident about taking on any ML problem/assignment.

Now, I have time to complete other interesting courses such as Probabilistic Graphical Models and Neural Networks for ML.

Thanks Dr. Andrew Ng and Thank you Coursera!

And yeah, this is my certificate.
