## Regularized linear regression to study models with different bias-variance properties

I recently completed Andrew Ng's Machine Learning course on Coursera.

While doing the course, we have to work through various quizzes and programming assignments.

Here, I am sharing my solutions for the weekly assignments throughout the course.

These solutions are for reference only.

I recommend solving the assignments honestly on your own; only then does completing the course really make sense.
But in case you get stuck somewhere in between, feel free to refer to the solutions provided here.

#### NOTE:

Don't just copy-paste the code for the sake of completion.
Even if you copy the code, make sure you understand it first.

Click here to check out the week-5 assignment solutions. Scroll down for the week-6 assignment solutions.

In this exercise, you will implement regularized linear regression and use it to study models with different bias-variance properties. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.

It consists of the following files:
• ex5.m - Octave/MATLAB script that steps you through the exercise
• ex5data1.mat - Dataset
• submit.m - Submission script that sends your solutions to our servers
• featureNormalize.m - Feature normalization function
• fmincg.m - Function minimization routine (similar to fminunc)
• plotFit.m - Plot a polynomial fit
• trainLinearReg.m - Trains linear regression using your cost function
• [*] linearRegCostFunction.m - Regularized linear regression cost function
• [*] learningCurve.m - Generates a learning curve
• [*] polyFeatures.m - Maps data into polynomial feature space
• [*] validationCurve.m - Generates a cross validation curve

[*] indicates files you will need to complete

### linearRegCostFunction.m :

```matlab
function [J, grad] = linearRegCostFunction(X, y, theta, lambda)
%LINEARREGCOSTFUNCTION Compute cost and gradient for regularized linear
%regression with multiple variables
%   [J, grad] = LINEARREGCOSTFUNCTION(X, y, theta, lambda) computes the
%   cost of using theta as the parameter for linear regression to fit the
%   data points in X and y. Returns the cost in J and the gradient in grad

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost and gradient of regularized linear
%               regression for a particular choice of theta.
%
%               You should set J to the cost and grad to the gradient.
%DIMENSIONS:
%   X     = 12x2 = m x (n+1)
%   y     = 12x1 = m x 1
%   theta = 2x1  = (n+1) x 1
%   grad  = 2x1  = (n+1) x 1

h_x = X * theta; % 12x1
J = (1/(2*m))*sum((h_x - y).^2) + (lambda/(2*m))*sum(theta(2:end).^2); % scalar

% grad(1) = (1/m)*sum((h_x-y).*X(:,1)); % scalar == 1x1
grad(1) = (1/m)*(X(:,1)'*(h_x-y)); % scalar == 1x1
grad(2:end) = (1/m)*(X(:,2:end)'*(h_x-y)) + (lambda/m)*theta(2:end); % n x 1
% =========================================================================

end
```
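For readers more comfortable with Python, the same cost and gradient can be sketched in NumPy. This is only an illustrative translation of the formulas above, not part of the Octave assignment; the function name `linear_reg_cost` is mine.

```python
import numpy as np

def linear_reg_cost(X, y, theta, lam):
    """Regularized linear regression cost J and gradient.

    X: (m, n+1) design matrix with a leading column of ones,
    y: (m,) targets, theta: (n+1,) parameters, lam: regularization strength.
    """
    m = len(y)
    h = X @ theta                  # hypothesis h_x, shape (m,)
    err = h - y
    # theta[0] (the bias term) is excluded from the regularization penalty
    J = (err @ err) / (2 * m) + (lam / (2 * m)) * (theta[1:] @ theta[1:])
    grad = (X.T @ err) / m         # unregularized gradient for all terms
    grad[1:] += (lam / m) * theta[1:]  # add the penalty only for j >= 1
    return J, grad

# Tiny check with a perfect fit: theta = [1, 2] reproduces y exactly,
# so the unregularized cost and gradient should both be zero.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([3.0, 5.0, 7.0])
theta = np.array([1.0, 2.0])
J, grad = linear_reg_cost(X, y, theta, 0.0)
```

With `lam > 0` the cost picks up the extra `(lam/(2m)) * sum(theta(2:end).^2)` term even at a perfect fit, which is exactly what the Octave line above computes.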

### learningCurve.m :

```matlab
function [error_train, error_val] = ...
    learningCurve(X, y, Xval, yval, lambda)
%LEARNINGCURVE Generates the train and cross validation set errors needed
%to plot a learning curve
%   [error_train, error_val] = ...
%       LEARNINGCURVE(X, y, Xval, yval, lambda) returns the train and
%       cross validation set errors for a learning curve. In particular,
%       it returns two vectors of the same length - error_train and
%       error_val. Then, error_train(i) contains the training error for
%       i examples (and similarly for error_val(i)).
%
%   In this function, you will compute the train and test errors for
%   dataset sizes from 1 up to m. In practice, when working with larger
%   datasets, you might want to do this in larger intervals.
%

% Number of training examples
m = size(X, 1);

% You need to return these values correctly
error_train = zeros(m, 1);
error_val   = zeros(m, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return training errors in
%               error_train and the cross validation errors in error_val.
%               i.e., error_train(i) and
%               error_val(i) should give you the errors
%               obtained after training on i examples.
%
% Note: You should evaluate the training error on the first i training
%       examples (i.e., X(1:i, :) and y(1:i)).
%
%       For the cross-validation error, you should instead evaluate on
%       the _entire_ cross validation set (Xval and yval).
%
% Note: If you are using your cost function (linearRegCostFunction)
%       to compute the training and cross validation error, you should
%       call the function with the lambda argument set to 0.
%       Do note that you will still need to use lambda when running
%       the training to obtain the theta parameters.
%
% Hint: You can loop over the examples with the following:
%
%       for i = 1:m
%           % Compute train/cross validation errors using training examples
%           % X(1:i, :) and y(1:i), storing the result in
%           % error_train(i) and error_val(i)
%           ....
%
%       end
%

% ---------------------- Sample Solution ----------------------

%DIMENSIONS:
%   error_train = m x 1
%   error_val   = m x 1

for i = 1:m
    Xtrain = X(1:i,:);
    ytrain = y(1:i);

    theta = trainLinearReg(Xtrain, ytrain, lambda);

    error_train(i) = linearRegCostFunction(Xtrain, ytrain, theta, 0); % lambda = 0
    error_val(i)   = linearRegCostFunction(Xval, yval, theta, 0);     % lambda = 0
end

% -------------------------------------------------------------

% =========================================================================

end
```
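As a cross-check, here is a NumPy sketch of the same learning-curve idea. It is illustrative only: a closed-form pseudo-inverse ridge fit stands in for trainLinearReg/fmincg, the bias column is assumed to be the first column of ones, and all Python names are mine.

```python
import numpy as np

def train_linear_reg(X, y, lam):
    # Closed-form regularized fit standing in for trainLinearReg/fmincg:
    # solve (X'X + lam*L) theta = X'y, with L = identity except L[0,0] = 0
    # so the bias term is not penalized. pinv tolerates tiny training sets.
    L = np.eye(X.shape[1])
    L[0, 0] = 0.0
    return np.linalg.pinv(X.T @ X + lam * L) @ (X.T @ y)

def learning_curve(X, y, Xval, yval, lam):
    m = len(y)
    err_train = np.zeros(m)
    err_val = np.zeros(m)
    for i in range(1, m + 1):
        # train on the first i examples only ...
        theta = train_linear_reg(X[:i], y[:i], lam)
        # ... but evaluate both errors WITHOUT regularization (lambda = 0),
        # and the validation error always on the FULL validation set.
        err_train[i - 1] = np.mean((X[:i] @ theta - y[:i]) ** 2) / 2
        err_val[i - 1] = np.mean((Xval @ theta - yval) ** 2) / 2
    return err_train, err_val
```

On noiseless linear data both curves drop to (near) zero once enough examples are seen, which matches the shape of the table the ex5 script prints.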

### polyFeatures.m :

```matlab
function [X_poly] = polyFeatures(X, p)
%POLYFEATURES Maps X (1D vector) into the p-th power
%   [X_poly] = POLYFEATURES(X, p) takes a data matrix X (size m x 1) and
%   maps each example into its polynomial features where
%   X_poly(i, :) = [X(i) X(i).^2 X(i).^3 ...  X(i).^p];
%

% You need to return the following variables correctly.
X_poly = zeros(numel(X), p); % m x p

% ====================== YOUR CODE HERE ======================
% Instructions: Given a vector X, return a matrix X_poly where the p-th
%               column of X_poly contains the values of X to the p-th power.
%
%
% Here, X does not include X0 == 1 column

%%%% WORKING: Using for loop %%%%%%
% for i = 1:p
%     X_poly(:,i) = X(:,1).^i;
% end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

X_poly(:,1:p) = X(:,1).^(1:p); % w/o for loop

% =========================================================================
end
```
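The broadcasting trick in `X_poly(:,1:p) = X(:,1).^(1:p)` translates directly to NumPy. The sketch below is an illustrative Python equivalent, not part of the assignment; `poly_features` is my own name for it.

```python
import numpy as np

def poly_features(x, p):
    # Mirror of the vectorized Octave line: a column vector raised
    # element-wise to the row vector of powers [1, 2, ..., p] broadcasts
    # into an m x p matrix whose i-th column is x.^i.
    x = np.asarray(x, dtype=float).reshape(-1, 1)  # m x 1
    return x ** np.arange(1, p + 1)                # m x p
```

For example, `poly_features([2, 3], 3)` yields the rows `[2, 4, 8]` and `[3, 9, 27]`, exactly what the commented-out for loop would build column by column.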

### validationCurve.m :

```matlab
function [lambda_vec, error_train, error_val] = ...
    validationCurve(X, y, Xval, yval)
%VALIDATIONCURVE Generate the train and validation errors needed to
%plot a validation curve that we can use to select lambda
%   [lambda_vec, error_train, error_val] = ...
%       VALIDATIONCURVE(X, y, Xval, yval) returns the train
%       and validation errors (in error_train, error_val)
%       for different values of lambda. You are given the training set (X,
%       y) and validation set (Xval, yval).
%

% Selected values of lambda (you should not change this)
lambda_vec = [0 0.001 0.003 0.01 0.03 0.1 0.3 1 3 10]';

% You need to return these variables correctly.
error_train = zeros(length(lambda_vec), 1);
error_val = zeros(length(lambda_vec), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return training errors in
%               error_train and the validation errors in error_val. The
%               vector lambda_vec contains the different lambda parameters
%               to use for each calculation of the errors, i.e,
%               error_train(i), and error_val(i) should give
%               you the errors obtained after training with
%               lambda = lambda_vec(i)
%
% Note: You can loop over lambda_vec with the following:
%
%       for i = 1:length(lambda_vec)
%           lambda = lambda_vec(i);
%           % Compute train / val errors when training linear
%           % regression with regularization parameter lambda
%           % You should store the result in error_train(i)
%           % and error_val(i)
%           ....
%
%       end
%
%

% Here, X & Xval already include the x0 column (the column of 1's)

m = size(X, 1);

%% %%%%% WORKING, BUT an unnecessary for loop over i is involved %%%%%%%%%%%
% for i = 1:m
%     for j = 1:length(lambda_vec);
%         lambda = lambda_vec(j);
%         Xtrain = X(1:i,:);
%         ytrain = y(1:i);
%
%         theta = trainLinearReg(Xtrain, ytrain, lambda);
%
%         error_train(j) = linearRegCostFunction(Xtrain, ytrain, theta, 0); % lambda = 0;
%         error_val(j)   = linearRegCostFunction(Xval, yval, theta, 0); % lambda = 0;
%     end
% end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% %%%%%%% WORKING, BUT an unnecessary for loop over i is involved %%%%%%%%%%%
% for j = 1:length(lambda_vec)
%     lambda = lambda_vec(j);
%     for i = 1:m
%         Xtrain = X(1:i,:);
%         ytrain = y(1:i);
%
%         theta = trainLinearReg(Xtrain, ytrain, lambda);
%
%         error_train(j) = linearRegCostFunction(Xtrain, ytrain, theta, 0); % lambda = 0;
%         error_val(j)   = linearRegCostFunction(Xval, yval, theta, 0); % lambda = 0;
%     end
% end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% %%% NOT WORKING: an unnecessary for loop inside the learningCurve function is involved %%%%%%
% for j = 1:length(lambda_vec)
%     lambda = lambda_vec(j);
%
%     [error_train_temp, error_val_temp] = ...
%     learningCurve(X, y, ...
%                   Xval, yval, ...
%                   lambda);
%
%     error_train(j) = error_train_temp(end);
%     error_val(j) = error_val_temp(end);
% end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% %%%%% WORKING: OPTIMIZED (only one for loop) %%%%%%%%%%%
for j = 1:length(lambda_vec)
    lambda = lambda_vec(j);

    theta = trainLinearReg(X, y, lambda);
    error_train(j) = linearRegCostFunction(X, y, theta, 0);       % lambda = 0
    error_val(j)   = linearRegCostFunction(Xval, yval, theta, 0); % lambda = 0
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% =========================================================================

end
```
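The optimized single-loop version above can also be sketched in NumPy. Again this is only an illustration: a closed-form pseudo-inverse ridge fit (`train_ridge`, my name) stands in for trainLinearReg, and the bias column is assumed to be the first column of ones.

```python
import numpy as np

def train_ridge(X, y, lam):
    # Stand-in for trainLinearReg: closed-form regularized fit that does
    # not penalize the bias column (first column of ones).
    L = np.eye(X.shape[1])
    L[0, 0] = 0.0
    return np.linalg.pinv(X.T @ X + lam * L) @ (X.T @ y)

def validation_curve(X, y, Xval, yval, lambdas):
    err_train = np.zeros(len(lambdas))
    err_val = np.zeros(len(lambdas))
    for j, lam in enumerate(lambdas):
        # train WITH regularization ...
        theta = train_ridge(X, y, lam)
        # ... but report both errors WITHOUT it (lambda = 0), so that
        # different lambdas are compared on the plain squared error.
        err_train[j] = np.mean((X @ theta - y) ** 2) / 2
        err_val[j] = np.mean((Xval @ theta - yval) ** 2) / 2
    return err_train, err_val
```

The validation error is then minimized over `lambdas` to pick the regularization strength, exactly as ex5.m does with the returned error_val vector.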

I have tried to provide optimized solutions, such as vectorized implementations, for each assignment. If you think more optimization can be done, please suggest corrections or improvements.


Feel free to ask doubts in the comment section. I will try my best to resolve them.
If you find this helpful in any way, then like, comment, and share the post.
This is the simplest way to encourage me to keep doing such work.

Thanks and Regards,
-Akshay P. Daga

1. Thank you for your solutions. :) I have 2 questions:
1) I see that the sizes of the test set and validation set are 21x1 each, while that of the training set is only 12x1. Why is the training set smaller than the test and validation sets?

2) Why do we set lambda = 0 while finding error_train and error_val in both the functions learningCurve.m and validationCurve.m?

2. These codes are not working for me. They're running, but not giving any marks.

1. Is there any other code (from another website) which works for you?
If that's the case, please let me know and I will recheck my code.

Otherwise, you must be making a small mistake on your end.

Either way, please let me know.

2. Your code has been working for me. It's not good to just copy and paste; it's better to understand the code first. Thanks for your help.

3. My code works and shows the correct result, but when I submit it, the grader does not give marks.

1. It's difficult for me to tell what's wrong with your code without checking it.
Sorry.

4. In learningCurve.m, when I write the same code as given above, it shows me a "division by zero" warning (with a value like …944305e-31):
warning: called from
fmincg at line 102 column 12
Can someone help me fix it?

6. Hi Akshay,
In the polyFeatures.m file, when X is raised to the powers 1, 2, 3, 4 and so on, the X values seem to be divided by a thousand before the power calculation. Why is that?
-15.9368
-29.1530
36.1895
37.4922
-48.0588
-8.9415
15.3078
-34.7063
1.3892
-44.3838
7.0135
22.7627
The above dataset is divided by 1000 in the second iteration.

7. Hello, thank you for your effort. Please, I have a question regarding your solution for linearRegCostFunction: I didn't understand when we need to add the sum function and when we are not supposed to add it. Could you please explain that? Thank you in advance.

1. If you look at the cost function equation, you have to calculate the (element-wise) square of the difference and then the summation of it. In that case you have to use the "sum" function.

In general, if you do matrix multiplication, the result already contains the "sum of products", so a separate "sum" function is not required.
But if you do element-wise multiplication or squaring on matrices (indicated by .* or .^ respectively), then you have to apply "sum" separately for the summation.
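For example, the two forms give the same number. Here is a quick NumPy illustration (Python used purely for illustration; the equivalent Octave comparison is sum((a-b).^2) versus (a-b)'*(a-b)):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 1.0])

# Element-wise square, then an explicit sum (like sum((a-b).^2) in Octave):
s1 = np.sum((a - b) ** 2)

# Inner product, where the summation is already built in ((a-b)'*(a-b)):
s2 = (a - b) @ (a - b)

# Both compute 1^2 + 1^2 + 2^2 = 6
assert s1 == s2 == 6.0
```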

2. Ah, I get it. Thank you so much for this explanation.

8. Hi - for the validation curve, I don't think you need the 1:m loop; the way it is implemented, only the last iteration of that loop matters. The code below works with only one loop:

len = length(lambda_vec);

for i = 1:len
    lambda = lambda_vec(i);
    theta = trainLinearReg(X, y, lambda);
    error_train(i) = linearRegCostFunction(X, y, theta, 0);
    error_val(i)   = linearRegCostFunction(Xval, yval, theta, 0);
end

9. I am not sure if you can help, but I can't find where the problem is.

I am getting the answers below for the learning curves, and my learning curve is identical to the one in ex5, yet it shows 0 points.

Can someone help me?

| # Training Examples | Train Error | Cross Validation Error |
| --- | --- | --- |
| 1 | 0.000000 | 205.121096 |
| 2 | 0.000000 | 110.300366 |
| 3 | 3.286595 | 45.010231 |
| 4 | 2.842678 | 48.368911 |
| 5 | 13.154049 | 35.865165 |
| 6 | 19.443963 | 33.829962 |
| 7 | 20.098522 | 31.970986 |
| 8 | 18.172859 | 30.862446 |
| 9 | 22.609405 | 31.135998 |
| 10 | 23.261462 | 28.936207 |
| 11 | 24.317250 | 29.551432 |
| 12 | 22.373906 | 29.433818 |

10. 1. And why do we remove the bias-unit column from X, i.e., X(:, 2:end)?

2. As per the theory taught in the lectures, we don't apply regularization to the first term; regularization is only applied from the 2nd term to the end. That's why we split the gradient into two parts, grad(1) and grad(2:end), process them separately, and then combine them.

3. Thanks, man. And thank you for sharing your work; it really helps people who are stuck.

4. Here is what I did, by the way. I didn't break the gradient into two parts, but it still works. Do you really need to break it into two parts? Am I applying regularization to the first term using this method?

11. Can you explain this part of the for loop?

Xtrain = X(1:i,:);
ytrain = y(1:i);

How are you getting the x and y values in a for loop for the training curve? I know the cross-validation x and y values are already given, but what is your method of getting the values for the training function?

1. Here, we increase the size of the training set in each iteration from 1 up to m and plot the graph of train and test error.

As the exercise notes put it: you compute the train and test errors for dataset sizes from 1 up to m. In practice, when working with larger datasets, you might want to do this in larger intervals.

12. Hi, can you please help? In validationCurve, why have you set lambda = 0 for error_train(j) and error_val(j)? Why are we not using different values of lambda here? Are different values of lambda only needed for theta?

1. In the validation curve, we calculate error_train(j) and error_val(j) without regularization. So, to remove the regularization term, we set lambda = 0.

2. Thank you, man!! Your solutions are a saviour.