ABSTRACT
Online display advertising has become a billion-dollar industry, and it keeps growing. Advertisers attempt to send marketing messages to attract potential customers via graphic banner ads on publishers’ web pages. Advertisers are charged for each view of a page that delivers their display ads. However, recent studies have discovered that more than half of the ads are never shown on users’ screens due to insufficient scrolling. Thus, advertisers waste a great amount of money on ads that do not bring any return on investment. Given this situation, the Interactive Advertising Bureau calls for a shift toward charging by viewable impression, i.e., charging only for ads that are actually viewed by users. Under this new pricing model, it is helpful to predict the viewability of an ad. This paper proposes two probabilistic latent class (PLC) models that predict the viewability of any given scroll depth for a user-page pair. Using a real-life dataset from a large publisher, the experiments demonstrate that our models outperform comparison systems.
EXISTING SYSTEM:
Existing work collects scrolling behavior and uses it as an implicit indicator of user interests to measure webpage quality. In contrast, we design algorithms to predict the scrolling behavior for any user-webpage pair. Several studies have attempted to predict user browsing behavior, including clicks and dwell time. The existing click prediction methods are not applicable to our application.
They rely heavily on side information (e.g., user profiles and users’ queries and tweets) to detect what the user is looking for and thereby suggest the items that are more likely to be clicked. In our application, on the other hand, there are no such explicit indicators of user information needs, nor detailed user profiles.
An SVD (Singular Value Decomposition) model can be trained on data consisting of users, pages, and whether a given scroll depth is in view in individual page views, and then used to predict the viewability of that specific scroll depth. However, a separate SVD model has to be trained for each possible scroll depth. Another option is to train an SVD model on data consisting of users, pages, and the maximum page depth that a user scrolls to on a page.
The predicted maximum page depth can yield a binary decision for any given scroll depth (i.e., in view or not), but it cannot give the probability that a scroll depth is in view.
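To make the comparison concrete, the following is a minimal sketch, in Java (the project's coding language), of such an SVD-style baseline trained with stochastic gradient descent on (user, page, max scroll depth) observations. The class and method names, the learning rate, and the regularization constant are illustrative assumptions, not taken from the paper.

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

/**
 * Sketch of the SVD-style baseline discussed above: factorize the user-by-page
 * matrix of observed maximum scroll depths and predict the depth for an unseen
 * user/page pair via the dot product of their latent factor vectors.
 */
public class SvdScrollDepthBaseline {
    private final int k;                 // number of latent factors
    private final double lr = 0.005;     // learning rate (illustrative)
    private final double reg = 0.02;     // L2 regularization (illustrative)
    private final Map<String, double[]> userFactors = new HashMap<>();
    private final Map<String, double[]> pageFactors = new HashMap<>();
    private final Random rnd = new Random(42);

    public SvdScrollDepthBaseline(int latentFactors) { this.k = latentFactors; }

    private double[] factors(Map<String, double[]> table, String id) {
        return table.computeIfAbsent(id, ignored -> {
            double[] f = new double[k];
            for (int i = 0; i < k; i++) f[i] = 0.1 * rnd.nextGaussian();
            return f;
        });
    }

    /** One stochastic gradient step on a single observed page view. */
    public void train(String user, String page, double maxScrollDepth) {
        double[] pu = factors(userFactors, user);
        double[] qa = factors(pageFactors, page);
        double err = maxScrollDepth - predict(user, page);
        for (int i = 0; i < k; i++) {
            double puOld = pu[i];
            pu[i] += lr * (err * qa[i] - reg * pu[i]);
            qa[i] += lr * (err * puOld - reg * qa[i]);
        }
    }

    /** Predicted maximum scroll depth (e.g., percentage of page height). */
    public double predict(String user, String page) {
        double[] pu = factors(userFactors, user);
        double[] qa = factors(pageFactors, page);
        double dot = 0;
        for (int i = 0; i < k; i++) dot += pu[i] * qa[i];
        return dot;
    }
}

As noted above, this baseline yields only a point estimate of the maximum depth (and hence a binary in-view decision for any target depth), not a probability that a given depth is in view.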
DISADVANTAGES:
It may be costly to build one SVD model for every single scroll depth.
No existing research attempts to predict the maximum scroll depth of a user/page pair or to predict ad viewability.
PROPOSED SYSTEM:
The characteristics of individual users and web pages can be utilized to improve the performance of max scroll depth prediction models. Users who prefer to scroll far down on most web pages have a higher probability of scrolling far down the current page. In this project, we use the two proposed PLC models, which perform substantially better than the comparison models, especially within the most challenging interval of scroll depths.
1. PREDICTION MODEL WITH CONSTANT MEMBERSHIPS
Our task is to infer the max scroll depth of a page view, x_ua, where u is the user and a is the webpage. It is intuitive that the characteristics of individual users and webpages can be utilized to improve the performance of max scroll depth prediction models. For example, users who prefer to scroll far down on most webpages have a higher probability of scrolling far down the current page. Also, features such as device type and geo-location are easy to model.
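The sketch below shows how constant memberships can be turned into a viewability probability for a target scroll depth. It assumes, purely for illustration, that the max scroll depth for each pair of latent user and page classes follows a Gaussian distribution; the class, field, and parameter names are ours, not the paper's.

/**
 * Sketch of PLCconst-style prediction: user u and page a carry fixed
 * membership probabilities over latent user classes and latent page classes,
 * and the viewability of a target scroll depth x is the mixture, over all
 * class pairs, of the probability that the max scroll depth reaches x.
 */
public class PlcConstViewability {
    private final double[] userMembership;    // pi_u(i), sums to 1
    private final double[] pageMembership;    // pi_a(j), sums to 1
    private final double[][] mu;              // mean max depth for class pair (i, j)
    private final double[][] sigma;           // std deviation for class pair (i, j)

    public PlcConstViewability(double[] userMembership, double[] pageMembership,
                               double[][] mu, double[][] sigma) {
        this.userMembership = userMembership;
        this.pageMembership = pageMembership;
        this.mu = mu;
        this.sigma = sigma;
    }

    /** P(max scroll depth of this page view >= targetDepth). */
    public double viewability(double targetDepth) {
        double p = 0.0;
        for (int i = 0; i < userMembership.length; i++) {
            for (int j = 0; j < pageMembership.length; j++) {
                double z = (targetDepth - mu[i][j]) / sigma[i][j];
                p += userMembership[i] * pageMembership[j] * (1.0 - normalCdf(z));
            }
        }
        return p;
    }

    /** Standard normal CDF via the Abramowitz-Stegun erf approximation. */
    private static double normalCdf(double z) {
        double x = Math.abs(z) / Math.sqrt(2.0);
        double t = 1.0 / (1.0 + 0.3275911 * x);
        double erf = 1.0 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t
                - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
        double cdf = 0.5 * (1.0 + erf);
        return z >= 0 ? cdf : 1.0 - cdf;
    }
}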
2. PREDICTION MODEL WITH DYNAMIC MEMBERSHIPS
By computing offline the memberships of users and webpages in latent user and webpage classes, PLCconst predicts the viewability of any target scroll depth in a page view. However, user and webpage memberships can in reality be dynamic during the online process, since user interests and page popularity keep changing. To capture the dynamic nature of the memberships, we propose to represent the memberships by a function whose output value is determined in real time. Meanwhile, the feature vectors should also be able to reflect changes in the user, the webpage, and the context.
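The sketch below illustrates one common way to realize such a membership function: a softmax over a linear transformation of the current feature vector, evaluated at prediction time. The weight matrix and feature layout are assumptions made for illustration and may differ from the paper's exact parameterization.

/**
 * Sketch of the dynamic-membership idea: instead of storing fixed membership
 * probabilities, the membership over latent classes is computed at prediction
 * time from the current user, webpage, and context features.
 */
public class DynamicMembership {
    private final double[][] weights;   // one weight vector per latent class

    public DynamicMembership(double[][] weights) { this.weights = weights; }

    /** Membership distribution over latent classes for the current features. */
    public double[] membership(double[] features) {
        double[] scores = new double[weights.length];
        double max = Double.NEGATIVE_INFINITY;
        for (int c = 0; c < weights.length; c++) {
            double s = 0.0;
            for (int d = 0; d < features.length; d++) s += weights[c][d] * features[d];
            scores[c] = s;
            max = Math.max(max, s);
        }
        double sum = 0.0;
        for (int c = 0; c < scores.length; c++) {      // numerically stable softmax
            scores[c] = Math.exp(scores[c] - max);
            sum += scores[c];
        }
        for (int c = 0; c < scores.length; c++) scores[c] /= sum;
        return scores;
    }
}

Because memberships are recomputed from fresh features at prediction time, the model can adapt to shifting user interests and page popularity without storing a separate membership vector per user and per page.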
For both models, the following attributes of each page view are used (an illustrative record class is sketched after the list):
1) The mean max scroll depth over all page views of the user.
2) The IP address of the user’s device.
3) The URL of the webpage.
4) The user’s geo-location.
5) The user’s GMT time.
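As referenced above, an illustrative container for these attributes might look as follows; the field names and types are assumptions made for this sketch, not a schema taken from the paper or its dataset.

import java.time.Instant;

/** Holds the per-page-view attributes listed above. */
public class PageViewRecord {
    public final String userIp;            // IP address of the user's device
    public final String pageUrl;           // URL of the webpage
    public final String geoLocation;       // user's geo-location (e.g., country/city code)
    public final Instant viewTimeGmt;      // timestamp of the view in GMT
    public final double userMeanMaxDepth;  // mean max scroll depth over the user's past page views

    public PageViewRecord(String userIp, String pageUrl, String geoLocation,
                          Instant viewTimeGmt, double userMeanMaxDepth) {
        this.userIp = userIp;
        this.pageUrl = pageUrl;
        this.geoLocation = geoLocation;
        this.viewTimeGmt = viewTimeGmt;
        this.userMeanMaxDepth = userMeanMaxDepth;
    }
}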
3. RERANKING ALGORITHM
The reranking algorithm ranks the advertisements that are frequently viewed by the user.
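Since the document does not specify the reranking formula, the sketch below simply orders candidate ads by the user's past view count and breaks ties with the predicted viewability of the ad's slot; the scoring rule and method names are illustrative assumptions.

import java.util.Comparator;
import java.util.List;
import java.util.Map;

/** Reorders candidate ads so the most frequently viewed (and most viewable) come first. */
public class AdReranker {
    /**
     * @param candidateAds     ad identifiers eligible for this page view
     * @param viewCounts       how many times this user has viewed each ad
     * @param slotViewability  predicted viewability of each ad's slot (0..1)
     * @return the same list, reordered in place from most to least preferred
     */
    public static List<String> rerank(List<String> candidateAds,
                                      Map<String, Integer> viewCounts,
                                      Map<String, Double> slotViewability) {
        candidateAds.sort(
            Comparator.comparingInt((String ad) -> viewCounts.getOrDefault(ad, 0))
                      .thenComparingDouble(ad -> slotViewability.getOrDefault(ad, 0.0))
                      .reversed());
        return candidateAds;
    }
}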
ADVANTAGES:
Both PLC models have substantially better prediction performance than the comparison systems.
The PLC with dynamic memberships can better adapt to shifts in user interests and webpage attractiveness, and has lower memory consumption.
ALGORITHM:
• PLC model
• Reranking algorithm
SYSTEM ARCHITECTURE:
SYSTEM CONFIGURATION
HARDWARE CONFIGURATION
System : Pentium IV 2.4 GHz.
Hard Disk : 40 GB.
Monitor : 15 VGA Colour.
Mouse : Logitech.
Ram : 1 GB.
SOFTWARE CONFIGURATION
Operating system : Windows XP/7/8.
Coding Language : JAVA/J2EE
IDE : Eclipse
Database : MYSQL