Apache Spark

Apache Spark™ is a fast and general engine for large-scale data processing. It runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. You can write applications quickly in Java, Scala, Python, or R, and combine SQL, streaming, and complex analytics. Spark runs on Hadoop, Mesos, standalone, or in the cloud, and it can access diverse data sources including HDFS, Cassandra, HBase, and S3.

Building a Recommendation Engine

import org.apache.spark.mllib.recommendation.ALS
import org.apache.spark.mllib.recommendation.Rating
import org.apache.spark.{SparkConf, SparkContext}

object Movie {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Movie").setMaster("local[2]")
    val sc = new SparkContext(conf)
    // Each line of holds: user id, movie id, rating, timestamp (tab-separated)
    val rawData = sc.textFile("/home/mouli/Downloads/ml-100k/")
    val rawRatings = => line.split("\t").take(3))
    val ratings = { case Array(user, movie, rating) =>
      Rating(user.toInt, movie.toInt, rating.toDouble)
    }
    // Train the model: rank = 50, iterations = 5, lambda = 0.01
    val model = ALS.train(ratings, 50, 5, 0.01)
    // Test the model: predict the rating user 789 would give movie 123
    val predictedRating = model.predict(789, 123)
    // Generate the top K recommendations for a user
    val userId = 789
    val K = 10
    val topKRecs = model.recommendProducts(userId, K)
    // Build a map from movie id to title out of u.item (pipe-separated)
    val movies = sc.textFile("/home/mouli/Downloads/ml-100k/u.item")
    val titles = => line.split("\\|").take(2))
      .map(array => (array(0).toInt, array(1)))
    // Print the 10 movies this user has rated highest, with titles
    val moviesForUser = ratings.keyBy(_.user).lookup(789)
    moviesForUser.sortBy(-_.rating).take(10)
      .map(rating => (titles(rating.product), rating.rating))
      .foreach(println)
  }
}
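Under the hood, ALS factorizes the ratings matrix into user and movie factor vectors of length rank (50 in the training call above), and model.predict(user, movie) is essentially the dot product of the two corresponding vectors. A minimal plain-Scala sketch of that step, with made-up factor values and rank 3 for brevity (in real code the vectors come from model.userFeatures and model.productFeatures):

```scala
object PredictSketch {
  // A predicted rating is the dot product of a user factor vector
  // and a movie factor vector of the same length (the ALS rank).
  def predictRating(user: Array[Double], movie: Array[Double]): Double = * movie).map { case (u, m) => u * m }.sum

  def main(args: Array[String]): Unit = {
    // Hypothetical latent factors, for illustration only.
    val userFactors  = Array(0.5, 1.0, -0.25)
    val movieFactors = Array(1.0, 0.8, 0.4)

    // 0.5*1.0 + 1.0*0.8 + (-0.25)*0.4 = 1.2
    println(predictRating(userFactors, movieFactors))
  }
}
```

This is why a larger rank lets the model capture finer-grained taste dimensions, at the cost of more computation and a greater risk of overfitting.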

Copyright © 2018 Bigdatamatica Solutions Private Limited