Tuesday, 28 February 2017

A Brief History of Databases

A Brief History of Database Systems
Data are raw facts that constitute the building blocks of information. A Database Management System (DBMS) is software used to define, store, manipulate, and control the data in a database.
Ancient History:
Data were not stored on disk; the programmer defined both the logical data structure and the physical structure, such as storage layout, access methods, I/O modes, etc.
1968 File-Based:
The predecessor of the database: data were maintained in flat files, with processing characteristics determined by the common use of the magnetic tape medium.
1.     Data are stored in files, with an interface between programs and files. A mapping exists between logical and physical files; one file may correspond to one or several programs.
2.     Various access methods exist, e.g., sequential, indexed, and random.
3.     Requires extensive programming in a third-generation language such as COBOL or BASIC.
1968-1980 Era of non-relational databases:
A database provides an integrated and structured collection of stored operational data that can be used or shared by application systems. The prominent hierarchical database was IBM's first DBMS, IMS. The prominent network database model was the CODASYL DBTG model; IDMS was the most popular network DBMS.
Hierarchical data model

1.     Mid 1960s: Rockwell partnered with IBM to create the Information Management System (IMS); IMS DB/DC led the mainframe database market in the 1970s and early 1980s.
2.     Based on tree structures: logically represented as an upside-down tree, with one-to-many relationships between parent and child records.
3.     Efficient searching; less redundant data; data independence; database security and integrity.
Network data model
1.     Early 1960s: Charles Bachman developed the first DBMS, the Integrated Data Store (IDS), at General Electric.
2.     It was standardized in 1971 by the CODASYL group (Conference on Data Systems Languages).
3.     A directed acyclic graph with nodes and edges.
4.     Identified three database components: the network schema (overall database organization); the subschema (views of the database per user); and the data management language (low-level and procedural).
1970-present Era of relational databases and Database Management Systems (DBMS):
Based on relational calculus: a shared collection of logically related data, and a description of this data, designed to meet the information needs of an organization. A system catalog (metadata) provides a description of the data to enable program-data independence. The logically related data comprise the entities, attributes, and relationships of an organization's information. Data abstraction allows a view level (a way of presenting data to a group of users) and a logical level (how data is understood when writing queries).
1970: Ted Codd at IBM's San Jose lab proposed the relational model.
1976: Peter Chen defined the Entity-Relationship (ER) model.
As relational database technology matured, more relational DBMSs were developed and the SQL standard was adopted by ISO and ANSI.

Object-oriented DBMSs (OODBMS) were developed, with little commercial success, because their advantages did not justify the cost of converting billions of bytes of data to a new format.
Object-orientation was instead incorporated into relational DBMSs, alongside new application areas such as data warehousing and OLAP, the web and Internet, text and multimedia, enterprise resource planning (ERP), and manufacturing resource planning (MRP).
Microsoft ships Access, a personal DBMS that gradually supplanted all other personal DBMS products.
1995: First Internet database applications.
XML is applied to database processing, solving long-standing database problems; major vendors begin to integrate XML into their DBMS products.
The main players:
1.     Microsoft Corp. – SQL Server
2.     Oracle – Oracle 9i
3.     IBM – IMS/DB, DB2

A database is an organized collection of data. It is the collection of schemas, tables, queries, reports, views, and other objects. The data are typically organized to model aspects of reality in a way that supports processes requiring information, such as modelling the availability of rooms in hotels in a way that supports finding a hotel with vacancies.

A database management system (DBMS) is a computer software application that interacts with the user, other applications, and the database itself to capture and analyze data. A general-purpose DBMS is designed to allow the definition, creation, querying, update, and administration of databases. Well-known DBMSs include MySQL, PostgreSQL, MongoDB, MariaDB, Microsoft SQL Server, Oracle, Sybase, SAP HANA, MemSQL and IBM DB2. A database is not generally portable across different DBMSs, but different DBMSs can interoperate by using standards such as SQL and ODBC or JDBC to allow a single application to work with more than one DBMS. Database management systems are often classified according to the database model that they support; the most popular database systems since the 1980s have all supported the relational model as represented by the SQL language. Sometimes a DBMS is loosely referred to as a 'database'.
Terminology and overview
Formally, a "database" refers to a set of related data and the way it is organized. Access to this data is usually provided by a "database management system" (DBMS) consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.

Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.

Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index). This article is concerned only with databases where the size and usage requirements necessitate use of a database management system.

Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups:

Data definition – Creation, modification and removal of definitions that define the organization of the data.
Update – Insertion, modification, and deletion of the actual data.
Retrieval – Providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form basically the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
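The first three functional groups can be sketched with Python's built-in sqlite3 module; the hotel schema below is purely illustrative and not taken from any particular system.

```python
import sqlite3

# Minimal sketch of a DBMS's functional groups; the "hotel" schema is hypothetical.
conn = sqlite3.connect(":memory:")  # an in-memory database for illustration
cur = conn.cursor()

# Data definition: create the organization of the data.
cur.execute("CREATE TABLE hotel (id INTEGER PRIMARY KEY, name TEXT, vacancies INTEGER)")

# Update: insert (and later modify or delete) the actual data.
cur.executemany("INSERT INTO hotel (name, vacancies) VALUES (?, ?)",
                [("Seaview", 3), ("Hilltop", 0)])

# Retrieval: return data in a directly usable form.
cur.execute("SELECT name FROM hotel WHERE vacancies > 0")
rows = cur.fetchall()
print(rows)  # hotels with rooms available

# Administration touches things like committing transactions for integrity.
conn.commit()
conn.close()
```

This mirrors the hotel-vacancy example used earlier: the query returns only hotels with rooms available.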
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.

Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions. Before the inception of Structured Query Language (SQL), data retrieved from databases was disparate, redundant, and disorderly, since there was no standard method to fetch it and arrange it in a concrete structure.

Since DBMSs comprise a significant economical market, computer and storage vendors often take into account DBMS requirements in their own development plans.

Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
General-purpose and special-purpose DBMSs
A DBMS has evolved into a complex software system and its development typically requires thousands of person-years of development effort. Some general-purpose DBMSs such as Adabas, Oracle and DB2 have been undergoing upgrades since the 1970s. General-purpose DBMSs aim to meet the needs of as many applications as possible, which adds to the complexity. However, because their development cost can be spread over a large number of users, they are often the most cost-effective approach. Still, a general-purpose DBMS is not always the optimal solution: in some cases it may introduce unnecessary overhead, so there are many examples of systems that use special-purpose databases. A common example is an email system that performs many of the functions of a general-purpose DBMS, such as the insertion and deletion of messages composed of various items of data or associating messages with a particular email address; but these functions are limited to what is required to handle email and don't provide the user with all of the functionality that would be available using a general-purpose DBMS.

Many other databases have application software that accesses the database on behalf of end-users, without exposing the DBMS interface directly. Application programmers may use a wire protocol directly, or more likely through an application programming interface. Database designers and database administrators interact with the DBMS through dedicated interfaces to build and maintain the applications' databases, and thus need some more knowledge and understanding about how DBMSs operate and the DBMSs' external interfaces and tuning parameters.

Following the technology progress in the areas of processors, computer memory, computer storage, and computer networks, the sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. The development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.

The two main early navigational data models were the hierarchical model, epitomized by IBM's IMS system, and the CODASYL model (network model), implemented in a number of products such as IDMS.

The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2015 they remain dominant: IBM DB2, Oracle, MySQL, and Microsoft SQL Server are the top DBMSs. The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models.

Object databases were developed in the 1980s to overcome the inconvenience of object-relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object-relational databases.

The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key-value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.

1960s, navigational DBMS
Further information: Navigational database

Basic structure of navigational CODASYL database model
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.

As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the "CODASYL approach", and soon a number of commercial products based on this approach entered the market.

The CODASYL approach relied on the "manual" navigation of a linked data set which was formed into a large network. Applications could find records by one of three methods:

Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a very straightforward query language. However, in the final tally, CODASYL was very complex and required significant training and effort to produce useful applications.

IBM also had its own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award presentation was The Programmer as Navigator. IMS is classified as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as network databases. IMS remains in use as of 2014.

1970s, relational DBMS
Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.

In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to use a "table" of fixed-length records, with each table used for a different type of entity. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables (or relations), with optional elements being moved out of the main table to where they would take up room only if needed. Data may be freely inserted, deleted and edited in these tables, with the DBMS doing whatever maintenance is needed to present a table view to the application/user.

In the relational model, records are "linked" using virtual keys not stored in the database but defined as needed between the data contained in the records.
The relational model also allowed the content of the database to evolve without constant rewriting of links and pointers. The relational part comes from entities referencing other entities in what is known as a one-to-many relationship, like a traditional hierarchical model, and a many-to-many relationship, like a navigational (network) model. Thus, a relational model can express both hierarchical and navigational models, as well as its native tabular model, allowing for pure or combined modeling in terms of these three models, as the application requires.

For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.

Linking the information back together is the key to this system. In the relational model, some bit of information was used as a "key", uniquely defining a particular record. When information was being collected about a user, information stored in the optional tables would be found by searching for this key. For instance, if the login name of a user is unique, addresses and phone numbers for that user would be recorded with the login name as its key. This simple "re-linking" of related data back into a single collection is something that traditional computer languages are not designed for.

Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating etc.) as well as providing a simple system for finding and returning sets of data in a single operation.
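The normalization and key-based re-linking described above can be sketched with Python's sqlite3; the table and column names here are hypothetical, chosen to match the user/phone example.

```python
import sqlite3

# Hypothetical normalized schema: optional data (phone numbers) lives in
# its own table and takes up room only when actually provided.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE user (login TEXT PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE phone (login TEXT, number TEXT)")  # login acts as the key

cur.execute("INSERT INTO user VALUES ('ava', 'Ava')")
cur.execute("INSERT INTO user VALUES ('bob', 'Bob')")  # Bob supplied no phone number
cur.execute("INSERT INTO phone VALUES ('ava', '555-0100')")

# One set-oriented query re-links the related data by key, instead of the
# record-at-a-time loops required by the navigational approach.
cur.execute("""SELECT user.name, phone.number
               FROM user JOIN phone ON user.login = phone.login""")
rows = cur.fetchall()
print(rows)  # only users who actually have a phone number appear
```

No row is stored, and none is returned, for users without a phone number; that is the space saving the relational model's sparse-data argument relies on.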

Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.

IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.

In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998.

Integrated approach
Main article: Database machine
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.

Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).

Late 1970s, SQL DBMS
IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).

Larry Ellison's Oracle started from a different chain, based on IBM's papers on System R, and beat IBM to market when the first version was released in 1978.

Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).

In Sweden, Codd's paper was also read and Mimer SQL was developed from the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high robustness in applications, an idea that was subsequently implemented on most other DBMSs.

Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.

The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top-selling software titles in the 1980s and early 1990s.

The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows relations between data to be relations to objects and their attributes and not to individual fields. The term "object-relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object-relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object-relational mappings (ORMs) attempt to solve the same problem.
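A toy illustration of the object-relational mapping idea, not any particular ORM library's API; the class and table names here are hypothetical:

```python
import sqlite3

# Rows become objects whose fields are attributes of that object,
# hiding the row-to-object translation that ORM libraries automate.
class Person:
    def __init__(self, name, phone, age):
        self.name, self.phone, self.age = name, phone, age

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, phone TEXT, age INTEGER)")
conn.execute("INSERT INTO person VALUES ('Ava', '555-0100', 34)")

# Map each relational row to a Person object.
people = [Person(*row) for row in conn.execute("SELECT name, phone, age FROM person")]
print(people[0].age)  # the age belongs to the person object, not a loose field
```

Real ORMs add change tracking, lazy loading, and query generation on top of this basic row-to-object mapping.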

XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in enterprise database management, where XML is used as the machine-to-machine data interoperability standard. XML database management systems include the commercial MarkLogic and Oracle Berkeley DB XML, and the free-to-use Clusterpoint Distributed XML/JSON Database. All are enterprise software database platforms and support industry-standard ACID-compliant transaction processing with strong database consistency characteristics and a high level of database security.

NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally. The most popular NoSQL systems include MongoDB, Couchbase, Riak, Memcached, Redis, CouchDB, Hazelcast, Apache Cassandra, and HBase, which are all open-source software products.

In recent years, there has been high demand for massively distributed databases with high partition tolerance, but according to the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases use what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
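A toy sketch of eventual consistency, not how any particular NoSQL system is implemented: writes land on one replica and propagate later, so reads from another replica may be stale until the replicas converge.

```python
# Two replicas of a key-value store, with asynchronous replication
# modeled as a log of pending writes. All names are illustrative.
replica_a = {}
replica_b = {}
pending = []  # replication log of writes not yet applied everywhere

def put(key, value):
    """Write to one replica; replication to the other happens later."""
    replica_a[key] = value
    pending.append((key, value))

def replicate():
    """Drain the log; after this the replicas have converged."""
    while pending:
        key, value = pending.pop(0)
        replica_b[key] = value

put("user:1", "Ava")
stale = replica_b.get("user:1")      # stale read: the write has not propagated
replicate()
converged = replica_b.get("user:1")  # after replication the replicas agree
print(stale, converged)
```

The window between the write and `replicate()` is exactly the reduced consistency that eventual-consistency systems trade for availability and partition tolerance.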

NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system. Such databases include ScaleBase, Clustrix, EnterpriseDB, MemSQL, NuoDB, and VoltDB.

Student Life vs Online Earner Life

Student: Life is carefree
Online Earner: Every minute is valuable
Student: Purchasing an iPhone is a dream
Online Earner: What to buy, 6S or 7???
Student: Failed to get a job of 3 lacs per month
Online Earner: Cleared a target of 40k-70k in a month
Student: Facebook friends list: 100
Online Earner: Daily friend requests and messages: 100+
Student: Learns about technology
Online Earner: Plays with technology while learning

BISE Multan 10th Class Physics Guess Papers

#Admin's #Guess #Series

#GUESS #PHYSICS #Class #10th

#2nd #March 2017 #THURSDAY

#1st #Annual 2017 #Exams



1# Define Electric Current.
2# State and explain Coulomb's Law.
3# Define Ampere.
4# Define Potential and Electric Potential.
5# Define Electric Field and Electric Field Intensity.
6# Prove the relations for parallel and series combinations of capacitors.
7# What are Electric Field Lines?
8# Define Capacitor and its unit.
9# Write the names of the types of capacitors.
10# What is a Mica Capacitor?

Simple Harmonic Motion with attached mass, Simple Pendulum, Ripple Tank, Characteristics of Sound, Intensity, Laws of Reflection and Refraction, Snell's Law, Total Internal Reflection, Compound Microscope, Telescope, Disorders of the Eye, Coulomb's Law, Electric Field and Intensity, Capacitors in Series and Parallel, DC Motor, OR and AND Gates, Transformer, CRO, Half-Life


Ch#10 (1,2,4,5,6,9,10).


Ch#12 (1,3,4,5,8,9,10,12)

Ch#13 (1,2,5,6,7,8,10).




#With #Best #Wishes

Saleem Iqbal Naz

#EST GOVT. Bukhari Public High School #Multan

Monday, 27 February 2017

All BISE Boards 10th class Assessment schemes

assessment scheme for 9th class 2017,
10th class assessment scheme 2017,
assessment scheme for 10th class 2017 gujranwala board,
assessment scheme for 10th class 2017 chemistry,
scheme of 10th class 2017,
assessment scheme for 10th class 2017 multan board,
paper scheme of 10th class 2017,

Allama Iqbal Open University Admissions Open Spring Semester 2017

Welcome to Allama Iqbal Open University
Admissions open from 01 February 2017 for the Spring Semester 2017 in various Programmes from SSC (Matric) to PhD.
Last Date: 6 March 2017
For details of AIOU Programmes visit following link. All information is given in it. 
1- Buy a prospectus from any Regional office of AIOU
or for online Admission form visit following link:
2- Fill your form as per given instructions in the prospectus.
3- Submit your form in nearest bank. (Banks names are given in the prospectus)
For merit-based Programmes, students have to send their applications by post as per the instructions given in the prospectus.
4- Get receipt of form from bank officer.
For further details you can also contact us on AIOU Helpline:
051 111-112-468

Educators Final Merit list District Sialkot

Educators Final Merit List for District Sialkot has been displayed

#District #Sialkot

#Final lists of #SSE
Best of luck for future

#Final #Merit #Lists of #SSEs (BS-16) (MALE/FEMALE) (All #Categories) have been #uploaded on the #Following #Link

Facebook Allows Content Owners to Earn from Their Live Content by Monetizing Live Videos on Facebook Pages

On Thursday, the company announced that it will let publishers show ad breaks in the middle of their prerecorded videos for the first time.
More Facebook pages will also be able to show ad breaks during their live broadcasts, a feature that had been previously limited to a small group of handpicked publishers, including Business Insider.
These ad breaks are essentially mini-commercials that may run after a video has played for 20 seconds and must be at least two minutes apart. Facebook has said it won't show pre-roll ads before a video plays like YouTube does, and these ad breaks are the social network's first real effort to monetize video to date.
Facebook is letting publishers keep 55% of the money generated from these new ad breaks, and it's currently working with a "handful" of US publishers to test them in non-live videos.
"Whether on Facebook or off, we're committed to continuing to work with our partners to develop new monetization products and ad formats for digital video," Facebook VP of Partnerships, Nick Grudin, said in a statement. "It’s early days, but today’s updates are a step towards this goal.”
Monetizing video has become an increasingly important part of Facebook's strategy to capture lucrative brand ad dollars from the TV industry. CEO Mark Zuckerberg has said that video is a "mega-trend" for Facebook akin to mobile phones. The company is currently looking into funding its own original shows and is set to release a standalone app for TVs in the coming weeks.
YouTube is attractive to high-quality video content producers because of its monetization options, something Facebook did not have until now. However, this is changing: Facebook has been updated with in-stream video ads for all publishers.
More publishers who create Live videos now have the ad-breaks option, which had been trialed with a limited number of partners. Facebook has also begun testing ad breaks for on-demand videos with a small number of partners.
Audience Network will now start delivering in-stream video ads for all eligible partners. Audience Network is Facebook’s ad delivery service that works on third party applications and web sites as well. The system delivers advertising relevant and targeted to the viewer. So far, Facebook has been piloting the delivery of in-stream ads with a select few partners. Facebook will start showing advertisements in the middle of videos for all content partners with sufficient inventory. Users on both mobile and desktop will be served with the ads.
Ad Breaks are now being made available to more publishers. Publishers in the United States with over 2,000 followers who have reached over 300 concurrent viewers will get the Ad Break option in a livestream. Those who are going Live can choose to take a break of up to twenty seconds in the middle of the broadcast. The impressions are instantly monetised, and the broadcaster gets a share of the resulting ad revenue.
Publishers can take the first Ad Break four minutes after going live. After that, one Ad Break is permitted every five minutes. A $ button in a blue circle appears on the interface when the publisher can take an Ad Break. The feature is only available in the United States for now, but Facebook plans to roll out Ad Breaks to publishers around the world in the future. The feature is still being tested, and eligible publishers will be automatically notified the next time they go Live.
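Taken together, the eligibility and timing rules described above amount to a simple predicate. The sketch below is purely illustrative; the function name and parameters are hypothetical, not part of any Facebook API:

```python
def can_take_ad_break(followers, concurrent_viewers, minutes_live,
                      minutes_since_last_break=None, has_violations=False):
    """Illustrative check of the US-only Ad Break rules described above.

    A hypothetical helper for explanation, not a Facebook API:
    - 2,000+ followers and 300+ concurrent viewers in the stream
    - first break only after 4 minutes of being live
    - subsequent breaks at least 5 minutes apart
    - Pages/profiles with policy violations are disqualified
    """
    if has_violations:
        return False
    if followers < 2000 or concurrent_viewers < 300:
        return False
    if minutes_since_last_break is None:   # no break taken yet in this stream
        return minutes_live >= 4
    return minutes_since_last_break >= 5
```

Each break itself then lasts up to twenty seconds, per the rules above.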
The last update is a limited test of inserting Ad Breaks in on-demand video. A few select publishers can insert short Ad Breaks in the videos they upload, or even into videos that have already been uploaded. The feature is in its early stages and will be analysed, tweaked, and improved based on the tests. Facebook plans to make Ad Breaks for on-demand videos available to more partners in the future.
An Update on Video Monetization
We want to help our partners monetize their premium video content, both on Facebook and on their own websites and apps. Today we are sharing three updates about video monetization on Facebook and through Audience Network:

All eligible publishers can now make money from in-stream video ads on their own websites and apps through Audience Network.
On Facebook, we’re expanding our beta test of Ad Breaks in Facebook Live to additional profiles and Pages in the U.S.
We have started testing Ad Breaks in on-demand video on Facebook with a small number of partners.


Audience Network is a service that places ads from Facebook’s advertisers onto third-party websites and apps. In May we announced an Audience Network test of in-stream video ads, and today we’re making in-stream video available to all eligible Audience Network publishers who have available inventory. Now, publishers can bring relevant video ads to people all over the world, on both mobile and desktop.

Publishers have historically been wary of video ads delivered from networks or exchanges because they can load slowly and are often unreliable. With Audience Network, advertisers upload their ads and bids to Facebook in advance—allowing us to quickly run an auction and return an ad that’s a good experience for the person watching it.

During our testing, publishers like Univision and Collective Press saw the benefits of Audience Network in-stream video ads. Univision, the most-visited Spanish-language website among U.S. Hispanics, wanted to complement its direct sales business with video ads from Audience Network. Univision has successfully implemented Audience Network in-stream video across the U.S., Spain, Colombia, Argentina, and Mexico. Since implementation in October, Audience Network U.S. eCPMs have been 52% higher than with other monetization partners.

If you are interested in growing your revenue through Audience Network in-stream video ads, please visit our product page to learn more or download our getting started guide here.


Over the past few months, a small group of video creators has been testing Ad Breaks to make money from their Facebook Live videos. As the name implies, Ad Breaks allow creators to take short breaks for ads during their live videos. When a broadcaster chooses to take an ad break, people watching the video will see an in-stream ad of up to 15 seconds in length. The broadcaster will earn a share of the resulting ad revenue.

Today we are making the feature available to more Live creators. Eligible Pages and profiles will have the option to use ad breaks in any live broadcast reaching 300 or more concurrent viewers.

How to Use Ad Breaks in a Live Video

Pages or profiles in the U.S. can qualify to test ad breaks if they have 2,000 or more followers and have reached 300 or more concurrent viewers in a recent live video.
You can take ad breaks during any live video reaching 300 or more concurrent viewers by tapping on the $ icon in the Live composer window.
You can take your first ad break after having been live for at least 4 minutes. You can take additional ad breaks after a minimum of 5 minutes between each break.
Each ad break lasts up to 20 seconds.
Please note that Pages or profiles with Intellectual Property or Community Standards violations may be disqualified from taking ad breaks. Ad Breaks are currently available only to U.S. broadcasters, but we hope to expand to additional countries in the future.

We welcome all eligible creators to participate in the next round of beta testing. Starting today, and rolling out over the next several days, Pages and profiles who qualify for the test will receive notifications the next time they go live.

Sunday, 26 February 2017

Container technology versus virtual machines – how will the cloud market change?

Where is the cloud market headed? How important will container technology become and how is Intel as a company reacting to changing requirements in the cloud market? We talked about all of this with Arjan Van De Ven, Director Advanced Systems Engineering, Intel Corporation, at Cloud Expo Europe in Frankfurt.
Right now there is a big shift happening in the industry with people moving from big virtual machines to smaller and decomposed container technology. Van De Ven ties this change to the need for more flexible and agile architecture and the changing management of workloads. Watch the full video interview to learn more about Intel’s projects on container security and the close partnership with 1&1.

How Does A Storage Business Grow In The Days Of The Cloud?

I do not know the average age of people who read these articles. I suspect, based on the published profiles of individual investors these days, that most readers are of a somewhat mature age, and certainly this writer falls into that bucket. Experience is probably a good thing when it comes to investing. It is not such a good thing when it comes to being a storage vendor in the age of the cloud. NetApp (NASDAQ:NTAP) has reached a certain level of maturity, having been through some near-death experiences and enjoyed years of fantastic growth. The company, crippled by both the advent of the cloud and a set of very bad choices in terms of technology and strategy, has been playing catch-up for a couple of years now. It reported the results of its fiscal Q3 last week, and the turnaround has reached its first plateau.
To a certain extent, investors had expected an upside (the shares appreciated into the numbers), and they got what they were looking for, but not much more. The shares appreciated about 4% - they might have risen by the same amount without an earnings release.
Overall, the company has stopped shrinking, and it continues to make significant progress in restoring its business model. And yet… while the shares have shown strong appreciation over the past year, climbing by about 75%, they have climbed a wall of worry and doubt with many analysts more or less incredulous that the company has executed something of a turnaround. Part of this article will address the concerns that have kept many analysts from expressing enthusiasm about the company's progress and outlook.
The company has seen a few upgrades so far this year. At the present time, of the 34 analysts who rate the name, the consensus is at hold with only 7 buys and 2 strong buys, more or less set off by 7 sell recommendations. Not surprisingly, the average price target for the shares is just a couple of percent greater than current quotations. While the shares have appreciated significantly, they have done so without the approbation of most analysts.
Headlines from a generally successful quarter
Just to review some of the salient headlines from the earnings release, revenues reached $1.4 billion, up slightly from the prior year. Product revenues, perhaps more important in terms of looking at the company's business progress, were up almost 5% year over year. The decline seen in hardware maintenance is likely to reverse in coming quarters on the heels of the growth in product revenues.
Earnings for the period were unchanged on a GAAP basis. The company continued to achieve a strong level of expense control with operating expenses declining by 8% year on year and actually declining sequentially, which is far better than normal seasonality. In a GAAP presentation, this operational improvement was offset by a restructuring charge and significant tax accruals.
In addition, product gross margins declined noticeably year on year. Part of that decline relates to promotions in order to combat competition in the space. To a greater or lesser extent, the decline in product gross margins was just about offset by a sharp improvement in the cost of hardware maintenance and other services. Management mentioned during the call that it might be able to end some of the promotions that have weighed on gross margins in the fiscal year that starts on May 1st. It also suggested that it would be better able to optimize the supply chain as it introduces newer products over the course of the next several quarters.
There are observers who are quite concerned by the current trajectory regarding product gross margins; I think that based on the available evidence that concern is likely overblown, but it has been and will remain a factor in analyst forecasts - until it is remediated as I believe is likely to happen.
Stock-based comp declined by more than 25% in the period and represented 16% of reported non-GAAP profit. Overall, reported non-GAAP EPS reached $0.82, up by almost 20% compared to the year-earlier quarter and a beat of more than 10% compared to prior expectations.
Cash flow from operations did show a decline, primarily due to increases in receivables, declines in deferred revenues and a significant swing in other assets and liabilities, particularly including inventories. But for the nine months ended in January, CFFO has been flat and free cash flow has contracted marginally. The company has continued to repurchase shares and to pay a modest dividend with a current yield of about 1.9%. The company has a 3-year history of increasing dividends in July. I think it's probable that it will continue that pattern given the overall performance of the company.
The company's guidance has led to a modest uptick in consensus EPS estimates for the current quarter, which ends the fiscal year, and a somewhat more significant increase in EPS estimates for fiscal year '18, which ends 4/30/18. The CFO intimated during the course of the conference call that operating margins would see further improvements in the coming fiscal year, and it seems likely that he will raise margin targets during April's scheduled analyst meeting such that estimates will see a further upward revision.
The company also increased its revenue projection for the current quarter to a level that will probably produce product revenue growth in the high-single digits. That guidance is now reflected in the consensus forecast, but the consensus revenue growth being forecast for NetApp is just 2% for fiscal '18, part of the FUD factor that still envelops perceptions regarding the outlook for the company. I expect that the analyst meeting, to be held on April 5th may lead the consensus forecast to a slightly greater growth cadence.
It is a bit more than execution
Yes, sales execution is an important component of the operational performance of any IT vendor. But many other factors have been playing out in the company's comeback. And many of these factors are likely to continue to persist in coming quarters and to become more visible.
There are a few trends that seem to be worth noting. Most of them can be seen in tabular and graphic form in this link. Q4 results, not shown here, showed better growth and further market share gains for NetApp. One key metric is that the percentage revenue declines in the overall storage market have been significantly less in 2016 than in prior years. Not growth, certainly, but less shrinkage. Storage capacity growth is actually continuing at strong levels in the low to mid 30% range. As has been the case for a long time now, pricing remains cutthroat and challenging.
Many investors, aided and abetted by some analysts, believe that storage is a moribund space populated by losers and struggling survivors. That is more than a bit of an exaggeration. What is a more realistic picture of the environment is that "customers are maximizing the value of their data by prioritizing investments to modernize their data centers, increase agility, and integrate cloud resources with on premise environments. Clustered ONTAP allows customers to modernize their infrastructure by replacing stand-alone silos of storage and monolithic frame arrays with a scale-out software-defined storage platform."
I don't think that anyone expects the enterprise storage space to be a consistent growth area in the future. Neither is it moribund with no prospects and no place in a cloud-centric world. There are, to be sure, some companies of note that are abandoning their data centers. But by far the most common storage strategy in the enterprise these days is for users to combine their own private storage and connect that to services offered by public cloud providers. By now, it seems clear that hybrid approaches are the most popular form of deployment, and the market opportunity for storage vendors is to work in that hybrid space.
Another trend of note is that the Dell/EMC merger has led to significant share losses for that vendor, which have continued at significant rates. The specifics of the transaction, with its use of lots of debt, have in turn led to operational issues that have created opportunities for the other vendors in the space. Simply put, the mountain of debt that was part of the deal structure is making it hard for the new business to fund required investments and is providing opportunities that have lifted lots of boats, including the one in which NetApp is traveling.
IBM (NYSE:IBM), as well, continues to suffer significant market share losses, as its overall hardware business continues to contract and resources are shifted to more promising areas. It's a much smaller opportunity compared to EMC/Dell, but still a significant one for vendors such as NetApp.
Another trend is the rapid emergence of flash as the mainstream storage technology. Flash grew by 61% in the IDC survey cited here. In the survey, flash was about 20% of overall storage revenues. NetApp now gets almost half of its product revenues from flash and continues to achieve triple-digit growth in that technology.
There can be advantages in being late to a party. It seems likely that NetApp, having come late to the realization of the winds of change blowing through the storage space due to the emergence of the cloud and to the rapid replacement of spinning disc by flash, has been able to find specific flash deployments that optimize user performance in the hybrid cloud. And it has been able to use its clustered ONTAP architecture, incorporating flash, as part of its messaging and its ability to differentiate itself from its competitors.
The results have been that the company is growing faster than the flash market as a whole and flash is a substantially greater proportion of its business than the industry average. The question for investors, simply put, is will those highly favorable trends persist, and if so, for how much longer and at what cadence. The growth that NetApp is achieving in flash is greater than the growth the largest flash-only storage vendor Pure (NYSE:PSTG) is achieving and overall, NetApp is a larger factor in the flash market than is Pure.
There are more claims in the storage market than used car salesmen have ever thought of articulating - or for that matter, perhaps the comparison is better by speaking of the extraordinary claims made by some of the political classes and the press. NetApp has often talked about storage efficiency advantages… and while it has done a better job of articulating its claims than competitors, it is not clear how storage efficiency really works out in the real world. Just for the record, the company is providing users a workload-specific guarantee that scales to a 5 to 1 data reduction ratio, supposedly the best in class. As an analyst trying to cover the large enterprise storage companies, I am not the individual who can separate out all the rather strident claims regarding one flash technology compared to another. But the results that NetApp has been reporting in flash go quite a bit beyond what might be reasonably ascribed to sales execution.
Where does the company go from here?
There are several moving parts in the NetApp engine. But the one that is most intriguing, and which has the potential to substantially move the growth needle, is the direction articulated by company CEO George Kurian. Mr. Kurian, during the course of his prepared remarks, talked about the potential in the hyper-converged space. He said that, "We will do what has not yet been done by the immature first generation of hyper-converged solutions, bringing hyper-converged infrastructure to the enterprise by allowing customers the flexibility to run multiple workloads without compromising performance, scale, or efficiency."
About a year ago, NetApp acquired SolidFire, in a deal that was not admired at the time. SolidFire's technology will be the foundation of this new offering. SolidFire seems to have been a reasonable success for NetApp thus far, and it has the potential to bring a substantial change to NetApp's center of gravity.
A next-generation, enterprise class hyper-converged solution will be a big deal if NetApp really has that ready early in fiscal 2018. During the call, Mr. Kurian observed that the company would make some exciting announcements in the hyper-converged space early in the coming fiscal year. Excitement, to be sure, is in the eyes of the beholder, but it could just be, given the great success of Nutanix (NASDAQ:NTNX) and the strong share price performance of that company, that a product announcement that laps Nutanix might really be exciting - at least for shareholders, including this writer.
It is worth noting, I think, that the current industry leader in the hyper-converged space is Nutanix. The company is approaching the $1 billion revenue run rate, and it is true that most of its deployments have been in branches and regions of larger enterprises. Bringing hyper-converged to the heart of the enterprise would be a tremendous accomplishment and would likely change the growth calculus for NetApp. VMWare/Dell (NYSE:VMW) also has a significant stake in the hyper-converged space on which I focused in my latest review of that business. Hewlett Packard Enterprise (NYSE:HPE) announced about a month ago that it is buying SimpliVity (Private:SIMP), which is said to be another entrant in the race to sell hyper-converged enterprise solutions. What all this focus on hyper-converged might do to Nutanix is another story not for exploration in this article. Overall, the Hyper-converged market is supposed to reach a value of $12.6 billion by 2022 with a CAGR of 43% between now and that time. It certainly represents a significant opportunity for NetApp that is really not considered by most investors when they think of the possible growth rate for the company.
But of course the core of the company's current business is selling its AFA and hybrid flash set of solutions. The growth the company achieved this past quarter suggests that it is gaining market share on a broad front. Part of that has to do with the on-going disruption at EMC and the feature-poor offerings of HPE. Management believes it is gaining on flash-specialist Pure. The folks at Pure have expressed a different point of view based on some of their technologies - particularly what is called NVMe. That is not a dispute that I could possibly resolve. NetApp already offers storage products that include integrated NVMe. I have no idea if NetApp's NVMe is better, worse or the same as that bragged about by Pure.
In the very short term, there appears to be a shortage of NAND memory that has resulted in somewhat higher component prices. This shortage has been a factor in the decline in gross margins the company reported in its most recent quarter. Expectations are that the shortage of NAND will ease later this year, which should create a tailwind for NetApp gross margins. As a result of the shortage, and higher prices for AFA, some larger users have elected to use hybrid flash, which NetApp can offer seamlessly thanks to its single operating system.
Like many companies, NetApp has had a history of trying to encapsulate much of its future direction in the course of annual analyst meetings. But I think the gist is that the company will talk about achieving a consistent, but moderate growth rate with an emphasis on profitability and cash generation coupled with a shareholder-friendly capital allocation program. Given current valuations, much more than that, on the order of hitting a home run in the hyper-converged market, would be lagniappe, and quite tasty and nutritious at that. The shares are barely priced to achieve moderate growth and as mentioned above, almost half of the analysts that cover this name do not even believe the company can accomplish that.
When I first wrote about NetApp on this site a year ago, as mentioned earlier, the shares were less than $25, and today they are closing in on $41. The question many readers may have is, are the shares still a value? I think they are, but quite clearly not on the basis of the EV/S that obtained a year ago. Back then, NetApp was still blowing quarters regularly; now it has started to beat and raise on a consistent basis.
In my world, EV/S is important, but some level of growth is equally important. A year ago, NetApp was still shrinking, and the path to growth was far murkier than it is after the company achieved a quarter in which product sales showed a small increase. I also find companies that have low EV/S ratios but are losing market share to have a high potential to be value traps. At this point, the company is now achieving significant market share gains in the most important market in its space. Had anyone suggested a year ago that the company would be selling more AFA than PSTG, they would have been greeted with derision. Things like that should mean something significant, even to value investors.
So, is the company more of a value now, than it was a year ago? I think it is. Many readers will have different opinions.
At the current time, the company has reduced its outstanding shares to 281 million. At today's closing price just shy of $41, that produces a market capitalization of $11.5 billion. The company has a net cash balance of $2.7 billion. That yields an enterprise value of $8.8 billion. The consensus forecast for revenue over the next 12 months is $5.5 billion. That yields an EV/S of 1.6X. Not less than 1X to be sure, but the company is showing a significant improvement in both profitability and growth, which ought to be worth something. Management is proving its chops and that is worth far more than just a statistical metric. The increase in product revenue is the first one NetApp has achieved on a yearly basis for the last several years - since prior to fiscal year 2012. It is not terribly surprising that investors are willing to pay for growth, even if it is marginal growth as opposed to years of decline.
The consensus earnings estimate for the next 12 months is now about $2.95. It is my guess that management will call for a more significant increase in profitability during its upcoming analyst day. But using $2.95 as the denominator yields a P/E of 14X. This will also be the first year for EPS growth since prior to fiscal-year 2012. In that year, EPS was $2.41 and the shares traded as high as $46. Lots of water under many bridges since back then.
As mentioned earlier, cash flow from operations declined in the latest quarter, although CFFO has been flat for the full year. It can be difficult to forecast CFFO for any company in a given quarter. Looking at Q4 2016, it seems likely to me that CFFO will increase substantially from the levels the company attained last year. Last year, GAAP net income was negligible for the quarter; this year it is forecast to be about $175 million. In addition, last quarter reflected a significant inventory build in NAND to ensure the company had adequate supplies to satisfy demand for AFA over the next two quarters. Considering these factors, I think that for the year CFFO will reach $1 billion, or perhaps a little higher. CapEx has been running at consistent rates throughout the year and should probably fall in the range of $185-$190 million. So, my expectation is that free cash flow will be $825-$850 million, providing a free cash flow yield of 9.5%.
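The valuation arithmetic in the preceding paragraphs can be reproduced in a few lines. The figures below are taken directly from this article; any small discrepancies are rounding:

```python
# Inputs quoted in the article
shares_m = 281            # shares outstanding, millions
price = 41.0              # approximate share price, dollars
net_cash_b = 2.7          # net cash balance, $ billions
revenue_ntm_b = 5.5       # consensus next-12-month revenue, $ billions
eps_ntm = 2.95            # consensus next-12-month EPS, dollars
fcf_m = 837.5             # midpoint of the $825M-$850M free cash flow estimate

market_cap_b = shares_m * price / 1000   # about $11.5 billion
ev_b = market_cap_b - net_cash_b         # about $8.8 billion
ev_to_sales = ev_b / revenue_ntm_b       # about 1.6x
pe = price / eps_ntm                     # about 14x
fcf_yield = fcf_m / (ev_b * 1000)        # about 9.5%
```

Working through the numbers this way also shows how sensitive the 9.5% free cash flow yield is to the CFFO estimate reaching $1 billion.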
I believe that these metrics are indicative of a company still being valued as a name without material growth potential in a market seen as close to toxic. Are NetApp shares still cheap? I think so and suggest that the execution the company has achieved over the past 4 quarters has made them cheaper. And I will be interested to see just how the company's management looks at the company's longer-term business model on its analyst day and equally interested to look at the company's "next generation" announcement of an enterprise class hyper-converged solution. Despite the share price appreciation over the past year, I believe that the company still offers investors a significant amount of positive alpha - and while not putting many eggs in the basket of any particular product launch, if the company pulls off what its CEO has set as a goal, it will have as much growth potential as any storage hardware name out there.
Disclosure: I am/we are long NTAP.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

How the Internet of Things Will Change Cloud Computing

The Internet of Things (IoT) global market is still in its infancy and positioned for exponential growth over the next couple of years as people continue to connect to the internet through billions of devices and applications. If you are an MSP that provides cloud-based file sharing, the IoT needs to be on your radar.
Here’s a look at how MSPs will be able to leverage the cloud, the only platform and service flexible, scalable, and analytics-capable enough to support the IoT as it grows.
Just how big is the IoT ecosystem?
The IoT ecosystem includes any form of technology that can connect to the internet. This means connected cars, wearables, TVs, smartphones, fitness equipment, robots, ATMs, vending machines, and all of the vertical applications, security and professional services, analytics and platforms that come with them.
And this is only the beginning. As this infographic by Intel illustrates, the IoT is predicted to have 4 billion people using 31 billion devices by 2020, nearly doubling the amount of connected technology we see now. 
The goal of the IoT is to make these applications, services, and devices as ubiquitous as possible, all while enabling  the gathering of vast quantities of data about user and consumer preferences.
As the IoT expands, so will cloud computing in the following ways.
1. Startups:
Given the amount of innovation evolving out of the IoT, you can expect to see many more start-ups offering new devices and services, which is great for cloud vendors.
Startups often embrace the cloud because of its “no upfront payment necessary” model. SaaS-enabled enterprise level applications allow smaller businesses to use sophisticated software for project and customer relationship management.
2. Developing countries:
Much of the cloud growth we see is actually the result of developing countries that have been slow to adopt the cloud. In fact, 90 percent of the revenue generated from the IoT has come from developing countries.
Although this percentage is expected to wane once these countries have finished playing “catch-up”, developing countries are still a great market for cloud growth.
3. Analytics and advertising:
Data analytics will become even more accurate in predicting consumer preference and behavior.

The IoT will dramatically change the way we live our daily lives and what information is stored about us. How do you believe the cloud might evolve as the IoT does? Leave a comment in the section below.

How Cloud is Changing the Colocation Data Center Market

Hyperscale cloud providers are sucking more and more customer workloads away from data center providers while gobbling up more and more data center capacity to host those workloads, changing the dynamics of the global colocation data center market in a big way.
One big result is that growth in retail colocation is slowing, while growth in the wholesale data center market is accelerating, according to the latest report by Structure Research. The analysts project a growth rate of 14.3 percent for retail colocation from 2016 to 2017 and 17.9 percent for wholesale; retail colocation services currently have 75 percent market share, with wholesale responsible for the rest.
The global colocation market size reached $33.59 billion in 2016, including both retail and wholesale services, Structure estimates. The firm expects it to grow 15.2 percent this year.
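As a sanity check, the overall growth estimate is consistent with the segment figures quoted above: weighting each segment's growth rate by its current market share reproduces the 15.2 percent blended figure.

```python
# Segment market shares and 2016-2017 growth rates from the Structure Research
# report cited above (retail 75% of the market, wholesale 25%)
retail_share, wholesale_share = 0.75, 0.25
retail_growth, wholesale_growth = 14.3, 17.9  # percent

blended = retail_share * retail_growth + wholesale_share * wholesale_growth
# 0.75 * 14.3 + 0.25 * 17.9 = 15.2 percent, matching Structure's overall estimate
```

This also illustrates why the blended rate will drift upward if the faster-growing wholesale segment keeps gaining share.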
Here’s how total colocation data center market revenue is split among regions (chart courtesy of Structure Research):
Numerous factors are responsible for the changes in growth rates between wholesale and retail, but the role of massive-scale public clouds by the likes of Amazon and Microsoft is the biggest one, according to Structure. Microsoft, for example, last year signed leases totaling more than 125 MW of data center capacity in the US alone, according to the commercial real estate firm North American Data Centers.
In its attempt to catch up to rivals, Oracle leased more than 30 MW in seven wholesale data center deals in the US in 2016. While nowhere near the capacity Microsoft took down, this was a lot more than Oracle had leased in the past. The company recently launched cloud availability regions in Northern Virginia, London, and Turkey, following the launch of its first region in the Phoenix market. Each region starts with a multi-megawatt two- or three-site deployment, and Oracle is nowhere near being done with expanding the geographic reach of its new cloud platform.
About two-thirds of the nearly 30 largest data center leases signed in 2016 in North America were signed by hyperscale cloud service providers, according to NADC. In addition to Microsoft and Oracle, they included Salesforce, IBM SoftLayer, and Box.
Read more: Who Leased the Most Data Center Space in 2016?
Growth Slowing Down for Smaller Players
The trend doesn’t mean the retail colocation data center market is declining, Structure pointed out. It remains a healthy market that’s “on a positive growth trajectory.”
Most of the growth, however, is concentrated at the top of the market, driven by the largest providers, Jabez Tan, research director at Structure, who co-authored the report, said in an interview with Data Center Knowledge. The bottom and middle of the market are seeing growth slow down as cloud providers chip away at the overall retail colocation revenue.
The addressable market for smaller colocation providers who aren’t operating at multi-region scale is shrinking, as they are essentially targeting local small and mid-size businesses, and those businesses are prime candidates for moving applications to the cloud. Many smaller providers have been trying to accelerate revenue growth by adding more sophisticated managed services capabilities, Tan pointed out, but “a lot of them are not growing as fast.”
From Carrier Neutrality to Cloud Neutrality
Another major change being forced by the cloud is a shift of focus from carrier neutrality in colocation data centers to cloud neutrality. As more and more enterprises move workloads to the cloud, data center companies expect them to want to use multiple cloud providers, so offering easy access to as many clouds as possible has become a big part of the strategy for colocation providers.
Some providers (especially Equinix) have been talking about the need to enable multi-cloud strategies for data center customers for several years now. However, there is no evidence that actual multi-cloud deployments are taking place en masse. For now, multi-cloud appears to be more of a table-stakes effort for colo providers. “I’m seeing it in very early stages,” Tan said. “It’ll start to develop over time.”
Asia Pacific Expected to Overtake North America
Another future development Structure is projecting is that Asia Pacific will soon outgrow North America in terms of colocation data center market share.
China will continue driving most of the growth, but the region is replete with emerging markets, such as Malaysia and Thailand, feeding on the momentum in mature markets, such as India, Japan, Singapore, and Australia, Tan explained. This combination will result in Asia Pacific outgrowing North America’s colocation data center market share by 2020, according to Structure’s projections.
Here’s how Structure estimates the balance of power in the global colocation market will shift over the next three years: