▸ Recommender Systems:
- Suppose you run a bookstore, and have ratings (1 to 5 stars) of books. Your collaborative filtering algorithm has learned a parameter vector \theta^{(j)} for user j, and a feature vector x^{(i)} for each book. You would like to compute the “training error”, meaning the average squared error of your system’s predictions on all the ratings that you have gotten from your users. Which of these are correct ways of doing so (check all that apply)? For this problem, let m be the total number of ratings you have gotten from your users. (Another way of saying this is that m = \sum_{(i,j):r(i,j)=1} 1.)
[Hint: Two of the four options below are correct.]
- In which of the following situations will a collaborative filtering system be the most appropriate learning algorithm (compared to linear or logistic regression)?
- You manage an online bookstore and you have the book ratings from many users. You want to learn to predict the expected sales volume (number of books sold) as a function of the average rating of a book.
- You’re an artist and hand-paint portraits for your clients. Each client gets a different portrait (of themselves) and gives you 1-5 star rating feedback, and each client purchases at most 1 portrait. You’d like to predict what rating your next customer will give you.
- You run an online bookstore and collect the ratings of many users. You want to use this to identify what books are “similar” to each other (i.e., if one user likes a certain book, what are other books that she might also like?)
- You own a clothing store that sells many styles and brands of jeans. You have collected reviews of the different styles and brands from frequent shoppers, and you want to use these reviews to offer those shoppers discounts on the jeans you think they are most likely to purchase.
- You've written a piece of software that has downloaded news articles from many news websites. In your system, you also keep track of which articles you personally like vs. dislike, and the system also stores away features of these articles (e.g., word counts, name of author). Using this information, you want to build a system to try to find additional new articles that you personally will like.
- You run an online news aggregator, and for every user, you know some subset of articles that the user likes and some different subset that the user dislikes. You'd want to use this to find other articles that the user likes.
- You manage an online bookstore and you have the book ratings from many users. For each user, you want to recommend other books she will enjoy, based on her own ratings and the ratings of other users.
- You run a movie empire, and want to build a movie recommendation system based on collaborative filtering. There were three popular review websites (which we’ll call A, B and C) to which users go to rate movies, and you have just acquired all three companies that run these websites. You’d like to merge the three companies’ datasets together to build a single/unified system. On website A, users rank a movie as having 1 through 5 stars. On website B, users rank on a scale of 1 - 10, and decimal values (e.g., 7.5) are allowed. On website C, the ratings are from 1 to 100. You also have enough information to identify users/movies on one website with users/movies on a different website. Which of the following statements is true?
- You can merge the three datasets into one, but you should first normalize each dataset’s ratings (say rescale each dataset’s ratings to a 0-1 range).
- You can combine all three training sets into one as long as you perform mean normalization and feature scaling after you merge the data.
- Assuming that there is at least one movie/user in one database that doesn’t also appear in a second database, there is no sound way to merge the datasets, because of the missing data.
- It is not possible to combine these websites’ data. You must build three separate recommendation systems.
- You can merge the three datasets into one, but you should first normalize each dataset separately by subtracting the mean and then dividing by (max - min), where (max - min) is (5 - 1), (10 - 1), or (100 - 1) for the three websites respectively.
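As a side note, the per-dataset rescaling suggested in these options can be sketched in a few lines of NumPy. The site names and ratings below are made up for illustration; the point is only that each site's scale is mapped onto a common 0-1 range before merging:

```python
import numpy as np

# Hypothetical example ratings from the three sites (illustrative values only).
site_a = np.array([1, 3, 5], dtype=float)     # 1-5 stars
site_b = np.array([1.0, 7.5, 10.0])           # 1-10, decimals allowed
site_c = np.array([1, 55, 100], dtype=float)  # 1-100

def rescale(r, lo, hi):
    """Map ratings from the range [lo, hi] onto a common 0-1 range."""
    return (r - lo) / (hi - lo)

merged = np.concatenate([
    rescale(site_a, 1, 5),
    rescale(site_b, 1, 10),
    rescale(site_c, 1, 100),
])
print(merged)  # every rating now lies in [0, 1]
```

After this step a single collaborative filtering model can be trained on the merged ratings, since they are all on the same scale.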
- Which of the following are true of collaborative filtering systems? Check all that apply.
- For collaborative filtering, it is possible to use one of the advanced optimization algorithms (L-BFGS/conjugate gradient/etc.) to solve for both the x^{(i)}'s and \theta^{(j)}'s simultaneously.
- Suppose you are writing a recommender system to predict a user’s book preferences. In order to build such a system, you need that user to rate all the other books in your training set.
- Even if each user has rated only a small fraction of all of your products (so r(i, j) = 0 for the vast majority of (i, j) pairs), you can still build a recommender system by using collaborative filtering.
- For collaborative filtering, the optimization algorithm you should use is gradient descent. In particular, you cannot use more advanced optimization algorithms (L-BFGS/conjugate gradient/etc.) for collaborative filtering, since you have to solve for both the x^{(i)}'s and \theta^{(j)}'s simultaneously.
- To use collaborative filtering, you need to manually design a feature vector for every item (e.g., movie) in your dataset, that describes that item's most important properties.
- Recall that the cost function for the content-based recommendation system is J(\theta) = \frac{1}{2} \sum_{j=1}^{n_u} \sum_{i:r(i,j)=1} \left( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \right)^2 + \frac{\lambda}{2} \sum_{j=1}^{n_u} \sum_{k=1}^{n} \left( \theta_k^{(j)} \right)^2. Suppose there is only one user and he has rated every movie in the training set. This implies that n_u = 1 and r(i,j) = 1 for every (i,j). In this case, the cost function J(\theta) is equivalent to the one used for regularized linear regression.
- When using gradient descent to train a collaborative filtering system, it is okay to initialize all the parameters (x^{(i)} and \theta^{(j)}) to zero.
- If you have a dataset of users' ratings on some products, you can use these to predict one user's preferences on products he has not rated.
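The points above can be illustrated with a minimal NumPy sketch of the collaborative filtering objective (all numbers below are made-up toy data, not from the quiz): the sum runs only over pairs with r(i,j) = 1, and a single cost function covers both the x^{(i)}'s and \theta^{(j)}'s, which is why an advanced optimizer can solve for both simultaneously.

```python
import numpy as np

# Tiny illustrative setup: 4 movies, 3 users, 2 latent features (all made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))       # movie feature vectors x^(i), to be learned
Theta = rng.normal(size=(3, 2))   # user parameter vectors theta^(j), to be learned
Y = np.array([[5, 0, 1],
              [4, 0, 0],
              [0, 2, 5],
              [0, 3, 4]], dtype=float)
R = (Y != 0).astype(float)        # r(i,j) = 1 only where a rating exists

def cost(X, Theta, Y, R, lam):
    """Collaborative filtering objective, summed over observed ratings only."""
    err = (X @ Theta.T - Y) * R   # zero out unrated (i,j) pairs
    return 0.5 * np.sum(err**2) + lam / 2 * (np.sum(X**2) + np.sum(Theta**2))

print(cost(X, Theta, Y, R, lam=1.0))
```

Because the mask R zeroes out unrated pairs, the sparsity of the rating matrix is no obstacle: entries with r(i,j) = 0 simply contribute nothing to the cost.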
- Suppose you have two matrices A and B, where A is 5x3 and B is 3x5. Their product is C = AB, a 5x5 matrix. Furthermore, you have a 5x5 matrix R where every entry is 0 or 1. You want to find the sum of all elements C(i, j) for which the corresponding R(i, j) is 1, and ignore all elements C(i, j) where R(i, j) = 0. One way to do so is with an explicit double loop over i and j that accumulates C(i, j) whenever R(i, j) is 1.
Which of the following pieces of Octave code will also correctly compute this total?
Check all that apply. Assume all options are in code.
- total = sum(sum((A * B) .* R))
- C = (A * B) .* R; total = sum(C(:));
- total = sum(sum((A * B) * R));
- C = (A * B) * R; total = sum(C(:));
- C = A * B; total = sum(sum(C(R == 1)));
- total = sum(sum(A(R == 1) * B(R == 1)));
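As a sanity check on these options, here is a NumPy analogue with random illustrative matrices. Note that Octave's `.*` is an element-wise product (NumPy's `*`), while Octave's `*` is a matrix product (NumPy's `@`), which is exactly what separates the correct options from the incorrect ones:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))
B = rng.normal(size=(3, 5))
R = rng.integers(0, 2, size=(5, 5)).astype(float)  # entries are 0 or 1
C = A @ B

# Loop-based reference: add up C(i, j) only where R(i, j) == 1.
reference = 0.0
for i in range(5):
    for j in range(5):
        if R[i, j] == 1:
            reference += C[i, j]

masked = np.sum((A @ B) * R)   # analogue of total = sum(sum((A * B) .* R))
selected = np.sum(C[R == 1])   # analogue of total = sum(sum(C(R == 1)))
wrong = np.sum((A @ B) @ R)    # analogue of sum(sum((A * B) * R)): matrix product, not a mask

print(masked, selected, wrong)
```

The element-wise mask and the logical-index selection both reproduce the loop's result; replacing the element-wise product with a matrix product mixes columns of C together and computes something different in general.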
Click here to see solutions for all Machine Learning Coursera Assignments.
&
Click here to see more codes for Raspberry Pi 3 and similar Family.
&
Click here to see more codes for NodeMCU ESP8266 and similar Family.
&
Click here to see more codes for Arduino Mega (ATMega 2560) and similar Family.
Feel free to ask doubts in the comment section. I will try my best to answer them.
If you find this helpful, please like, comment, and share the post.
This is the simplest way to encourage me to keep doing such work.
Thanks & Regards,
- APDaga DumpBox
Options on Q1, Q4 and Q5 need to be revised.
Can you please explain what you find wrong in the existing questions and answers?
I too got the theta transpose choices, which aren't in your Q1 list.
@Rums Please post the question and choices you got in your exam. I will update it here. It will be a great help for others as well.
Hi, 'unknown' has posted the choices below on the Sep 2020 post. The choices have (\theta^{(j)})^T x^{(i)} - y^{(i,j)} terms.
Q1 (1 point):
- \frac{1}{m} \sum_{(i,j):r(i,j)=1} \left( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \right)^2
- \frac{1}{m} \sum_{j=1}^{n_u} \sum_{i:r(i,j)=1} \left( \sum_{k=1}^n (\theta^{(k)})_j x^{(k)}_i - y^{(i,j)} \right)^2
- \frac{1}{m} \sum_{(i,j):r(i,j)=1} \left( (\theta^{(j)})^T x^{(i)} - r(i,j) \right)^2
- \frac{1}{m} \sum_{i=1}^{n_m} \sum_{j:r(i,j)=1} \left( \sum_{k=1}^n (\theta^{(j)})_k x^{(i)}_k - y^{(i,j)} \right)^2
I could email you a doc file with the other correct options. Posting them here does not look good. It was sent yesterday.
Thank you very much for your help.
I have updated/added the options for Q2 & Q4. But Q1 in the above post and in your doc is the same; I couldn't find any difference. If there is any, please tell me.
Hi, would you help me with the "Java Programming: Build a Recommendation System" quiz answers, please?
[email protected]