An entity–attribute–value model (EAV) is a data model optimized for the space-efficient storage of sparse (or ad hoc) property or data values, intended for situations where runtime usage patterns are arbitrary, subject to user variation, or otherwise unforeseeable using a fixed design. The use case targets applications that offer a large or rich system of defined property types, which are in turn appropriate to a wide set of entities, but where typically only a small, specific selection of these is instantiated (or persisted) for a given entity. This type of data model therefore relates to the mathematical notion of a sparse matrix. EAV is also known as the object–attribute–value model, vertical database model, and open schema.
This data representation is analogous to space-efficient methods of storing a sparse matrix, where only non-empty values are stored. In an EAV data model, each attribute–value pair is a fact describing an entity, and a row in an EAV table stores a single fact. EAV tables are often described as "long and skinny": "long" refers to the number of rows, "skinny" to the few columns.
Data is recorded as three columns: the entity (the item being described), the attribute or parameter (typically implemented as a foreign key into a table of attribute definitions), and the value of the attribute.
Consider how one would try to represent a general-purpose clinical record in a relational database. Clearly creating a table (or a set of tables) with thousands of columns is not feasible, because the vast majority of columns would be null. To complicate things, in a longitudinal medical record that follows the patient over time, there may be multiple values of the same parameter: the height and weight of a child, for example, change as the child grows. Finally, the universe of clinical findings keeps growing: for example, diseases emerge and new lab tests are devised; this would require constant addition of columns, and constant revision of the user interface. The term "attribute volatility" is sometimes used to describe the problems or situations that arise when the list of available attributes or their definitions needs to evolve over time.
The following shows a selection of rows of an EAV table for clinical findings from a visit to a doctor for a fever on the morning of 1998-05-01. The entries shown within angle brackets are references to entries in other tables, shown here as text rather than as encoded foreign key values for ease of understanding. In this example, the values are all literal values, but they could also be pre-defined value lists. The latter are particularly useful when the possible values are known to be limited (i.e., enumerable).
The example below illustrates findings that might be seen in a patient with pneumonia.
| Entity | Attribute | Value |
|---|---|---|
| ⟨Patient XYZ, 1998-05-01 09:30⟩ | ⟨Temperature in degrees Celsius⟩ | "38.9" |
| ⟨Patient XYZ, 1998-05-01 09:30⟩ | ⟨Presence of cough⟩ | "True" |
| ⟨Patient XYZ, 1998-05-01 09:30⟩ | ⟨Type of Cough⟩ | "With phlegm, yellowish, streaks of blood" |
| ⟨Patient XYZ, 1998-05-01 09:30⟩ | ⟨Heart rate in beats per minute⟩ | "98" |
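For illustration, the following is a minimal sketch of how such a schema might be declared in SQL. The table and column names (clinical_event, attribute_definition, eav_finding) are hypothetical and chosen only to mirror the example above, with all values coerced to strings for simplicity.

```sql
-- Hypothetical minimal EAV schema corresponding to the example above.
CREATE TABLE attribute_definition (
    attribute_id   INTEGER PRIMARY KEY,
    attribute_name VARCHAR(200) NOT NULL       -- e.g. 'Temperature in degrees Celsius'
);

CREATE TABLE clinical_event (
    event_id   INTEGER PRIMARY KEY,
    patient_id INTEGER   NOT NULL,
    event_time TIMESTAMP NOT NULL               -- e.g. '1998-05-01 09:30'
);

-- One row per fact: the "long and skinny" EAV table.
CREATE TABLE eav_finding (
    event_id     INTEGER NOT NULL REFERENCES clinical_event(event_id),
    attribute_id INTEGER NOT NULL REFERENCES attribute_definition(attribute_id),
    value        VARCHAR(4000),                 -- all values stored as strings here
    PRIMARY KEY (event_id, attribute_id)
);
```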
Row modeling, where facts about something (for example, a sales transaction) are recorded as multiple rows rather than multiple columns, is a standard data modeling technique. The differences between row modeling and EAV (which may be considered a generalization of row modeling) are that a row-modeled table is homogeneous in the facts it describes (e.g., a table of line items for sales transactions), whereas an EAV table can record almost any type of fact, and that the data types of the value columns in a row-modeled table are predetermined by the nature of the facts recorded, whereas in an EAV table the conceptual data type of a value depends on the attribute recorded in that row.
In a clinical data repository, row modeling also finds numerous uses; the laboratory test subschema is typically modeled this way, because lab test results are typically numeric, or can be encoded numerically.
The circumstances where one would need to go beyond standard row modeling to EAV are described below:
Certain ("hybrid") classes have some attributes that are non-sparse (present in all or most instances), while other attributes are highly variable and sparse. The latter are suitable for EAV modeling. For example, descriptions of products made by a conglomerate corporation depend on the product category, e.g., the attributes necessary to describe a brand of light bulb are quite different from those required to describe a medical imaging device, but both have common attributes such as packaging unit and per-item cost.
In clinical data, the entity is typically a clinical event, as described above. In more general-purpose settings, the entity is a foreign key into an "objects" table that records common information about every "object" (thing) in the database – at the minimum, a preferred name and brief description, as well as the category/class of entity to which it belongs. Every record (object) in this table is assigned a machine-generated object ID.
The "objects table" approach was pioneered by Tom Slezak and colleagues at Lawrence Livermore Laboratories for the Chromosome 19 database, and is now standard in most large bioinformatics databases. The use of an objects table does not mandate the concurrent use of an EAV design: conventional tables can be used to store the category-specific details of each object.
The major benefit to a central objects table is that, by having a supporting table of object synonyms and keywords, one can provide a standard Google-like search mechanism across the entire system where the user can find information about any object of interest without having to first specify the category that it belongs to. (This is important in bioscience systems where a keyword like "acetylcholine" could refer either to the molecule itself, which is a neurotransmitter, or the biological receptor to which it binds.)
In the EAV table itself, this is just an attribute ID, a foreign key into an Attribute Definitions table, as stated above. However, there are usually multiple metadata tables that contain attribute-related information, and these are discussed shortly.
Coercing all values into strings, as in the EAV data example above, results in a simple, but non-scalable, structure: constant data type inter-conversions are required if one wants to do anything with the values, and an index on the value column of an EAV table is essentially useless. Also, it is not convenient to store large binary data, such as images, in Base64-encoded form in the same table as small integers or strings. Therefore, larger systems use separate EAV tables for each data type (including binary large objects, "BLOBs"), with the metadata for a given attribute identifying the EAV table in which its data will be stored. This approach is actually quite efficient because the modest amount of attribute metadata for a given class or form that a user chooses to work with can be cached readily in memory. However, it requires moving data from one table to another if an attribute's data type is changed.
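The following is a hedged sketch of this per-data-type arrangement. Table names such as eav_integer and eav_string are illustrative only, and the BLOB table uses PostgreSQL's BYTEA type as an assumption (other engines would use BLOB or VARBINARY).

```sql
-- Illustrative sketch: one value table per data type, with the attribute
-- metadata recording which table holds each attribute's values.
CREATE TABLE attribute_definition (
    attribute_id   INTEGER PRIMARY KEY,
    attribute_name VARCHAR(200) NOT NULL,
    data_type      VARCHAR(20)  NOT NULL   -- e.g. 'integer', 'string', 'date', 'blob'
);

CREATE TABLE eav_integer (
    entity_id    INTEGER NOT NULL,
    attribute_id INTEGER NOT NULL REFERENCES attribute_definition(attribute_id),
    value        INTEGER NOT NULL,
    PRIMARY KEY (entity_id, attribute_id)
);

CREATE TABLE eav_string (
    entity_id    INTEGER NOT NULL,
    attribute_id INTEGER NOT NULL REFERENCES attribute_definition(attribute_id),
    value        VARCHAR(4000) NOT NULL,
    PRIMARY KEY (entity_id, attribute_id)
);

-- Binary large objects get their own table, kept away from the small values.
CREATE TABLE eav_blob (
    entity_id    INTEGER NOT NULL,
    attribute_id INTEGER NOT NULL REFERENCES attribute_definition(attribute_id),
    value        BYTEA NOT NULL,            -- assumes PostgreSQL; BLOB/VARBINARY elsewhere
    PRIMARY KEY (entity_id, attribute_id)
);
```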
EAV, as a general-purpose means of knowledge representation, originated with the concept of "association lists" (attribute–value pairs). Commonly used today, these were first introduced in the language LISP. Attribute–value pairs are widely used for diverse applications, such as configuration files (using a simple syntax like attribute = value). An example of non-database use of EAV is in UIMA (Unstructured Information Management Architecture), a standard now managed by the Apache Foundation and employed in areas such as natural language processing. Software that analyzes text typically marks up ("annotates") a segment: the example provided in the UIMA tutorial is a program that performs named-entity recognition (NER) on a document, annotating the text segment "President Bush" with the annotation–attribute–value triple (Person, Full_Name, "George W. Bush").[1] Such annotations may be stored in a database table.
While EAV does not have a direct connection to AV-pairs, Stead and Hammond appear to be the first to have conceived of their use for persistent storage of arbitrarily complex data. The first medical record systems to employ EAV were the Regenstrief electronic medical record (the effort led by Clement McDonald), William Stead and Ed Hammond's TMR (The Medical Record) system and the HELP Clinical Data Repository (CDR) created by Homer Warner's group at LDS Hospital, Salt Lake City, Utah.[2] (The Regenstrief system actually used a Patient-Attribute-Timestamp-Value design: the use of the timestamp supported retrieval of values for a given patient/attribute in chronological order.) All these systems, developed in the 1970s, were released before commercial systems based on E.F. Codd's relational database model were available, though HELP was much later ported to a relational architecture and commercialized by the 3M corporation. (Note that while Codd's landmark paper was published in 1970, its heavily mathematical tone had the unfortunate effect of diminishing its accessibility among non-computer-science types and consequently delaying the model's acceptance in IT and software-vendor circles. The value of the subsequent contribution of Christopher J. Date, Codd's colleague at IBM, in translating these ideas into accessible language, accompanied by simple examples that illustrated their power, cannot be overstated.)
A group at the Columbia-Presbyterian Medical Center was the first to use a relational database engine as the foundation of an EAV system.
The open-source TrialDB clinical study data management system of Nadkarni et al. was the first to use multiple EAV tables, one for each DBMS data type.
The EAV/CR framework, designed primarily by Luis Marenco and Prakash Nadkarni, overlaid the principles of object orientation onto EAV; it built on Tom Slezak's object table approach (described earlier in the "Entity" section). SenseLab, a publicly accessible neuroscience database, is built with the EAV/CR framework.
The term "EAV database" refers to a database design where a significant proportion of the data is modeled as EAV. However, even in a database described as "EAV-based", some tables in the system are traditional relational tables.
As noted above, EAV modeling makes sense for categories of data, such as clinical findings, where attributes are numerous and sparse. Where these conditions do not hold, standard relational modeling (i.e., one column per attribute) is preferable; using EAV does not mean abandoning common sense or principles of good relational design. In clinical record systems, the subschemas dealing with patient demographics and billing are typically modeled conventionally. (While most vendor database schemas are proprietary, VistA, the system used throughout the United States Department of Veterans Affairs (VA) medical system, known as the Veterans Health Administration (VHA),[3] is open-source and its schema is readily inspectable, though it uses a MUMPS database engine rather than a relational database.)
As discussed shortly, an EAV database is essentially unmaintainable without numerous supporting tables that contain metadata. The metadata tables, which typically outnumber the EAV tables by a factor of at least three, are themselves standard relational tables. An example of a metadata table is the Attribute Definitions table mentioned above.
In a simple EAV design, the values of an attribute are simple or primitive data types as far as the database engine is concerned. However, in EAV systems used for the representation of highly diverse data, it is possible that a given object (class instance) may have substructure: that is, some of its attributes may represent other kinds of objects, which in turn may have substructure, to an arbitrary level of complexity. A car, for example, has an engine, a transmission, etc., and the engine has components such as cylinders. (The permissible substructure for a given class is defined within the system's attribute metadata, as discussed later. Thus, for example, the attribute "random-access-memory" could apply to the class "computer" but not to the class "engine".)
To represent substructure, one incorporates a special EAV table where the value column contains references to other entities in the system (i.e., foreign key values into the objects table). To get all the information on a given object requires a recursive traversal of the metadata, followed by a recursive traversal of the data that stops when every attribute retrieved is simple (atomic). Recursive traversal is necessary whether details of an individual class are represented in conventional or EAV form; such traversal is performed in standard object–relational systems, for example. In practice, the number of levels of recursion tends to be relatively modest for most classes, so the performance penalties due to recursion are modest, especially with indexing of object IDs.
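As a sketch of the data-side traversal, a recursive SQL query (a recursive common table expression) can walk such a table of object-valued attributes. The eav_object table and the root object ID 42 below are hypothetical.

```sql
-- Hypothetical table: attribute values that are themselves references to other objects.
CREATE TABLE eav_object (
    entity_id       INTEGER NOT NULL,   -- the parent object
    attribute_id    INTEGER NOT NULL,   -- e.g. "engine", "transmission"
    value_object_id INTEGER NOT NULL    -- the child object (foreign key into the objects table)
);

-- Recursively collect object 42 and all of its sub-objects, to any depth.
WITH RECURSIVE object_tree (object_id, depth) AS (
    SELECT CAST(42 AS INTEGER) AS object_id, 0 AS depth
    UNION ALL
    SELECT e.value_object_id, t.depth + 1
    FROM   eav_object e
    JOIN   object_tree t ON e.entity_id = t.object_id
)
SELECT * FROM object_tree;
```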
EAV/CR (EAV with Classes and Relationships) refers to a framework that supports complex substructure. Its name is somewhat of a misnomer: while it was an offshoot of work on EAV systems, in practice, many or even most of the classes in such a system may be represented in standard relational form, based on whether the attributes are sparse or dense. EAV/CR is really characterized by its very detailed metadata, which is rich enough to support the automatic generation of browsing interfaces to individual classes without having to write class-by-class user-interface code. The basis of such browser interfaces is that it is possible to generate a batch of dynamic SQL queries that is independent of the class of the object, by first consulting its metadata and using metadata information to generate a sequence of queries against the data tables, and some of these queries may be arbitrarily recursive. This approach works well for object-at-a-time queries, as in Web-based browsing interfaces where clicking on the name of an object brings up all details of the object in a separate page: the metadata associated with that object's class also facilitates the presentation of the object's details, because it includes captions of individual attributes, the order in which they are to be presented, as well as how they are to be grouped.
One approach to EAV/CR is to allow columns to hold JSON structures, which thus provide the needed class structure. For example, PostgreSQL, as of version 9.4, offers JSON binary column (JSONB) support, allowing JSON attributes to be queried, indexed and joined.
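A minimal PostgreSQL sketch of this idea follows; the entity table and attribute names are hypothetical. The GIN index supports containment queries (the @> operator) on the JSONB column.

```sql
-- Minimal PostgreSQL sketch: sparse attributes stored in a JSONB column.
CREATE TABLE entity (
    entity_id  SERIAL PRIMARY KEY,
    class_name VARCHAR(100) NOT NULL,
    attrs      JSONB NOT NULL DEFAULT '{}'
);

-- A GIN index makes containment queries on the JSONB attributes efficient.
CREATE INDEX entity_attrs_idx ON entity USING GIN (attrs);

-- Find entities whose attributes contain a given attribute-value pair.
SELECT entity_id
FROM   entity
WHERE  attrs @> '{"presence_of_cough": true}';
```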
In the words of Prof. Dr. Daniel Masys (formerly Chair of Vanderbilt University's Medical Informatics Department), the challenges of working with EAV stem from the fact that in an EAV database, the "physical schema" (the way data are stored) is radically different from the "logical schema" – the way users, and many software applications such as statistics packages, regard it, i.e., as conventional rows and columns for individual classes. (Because an EAV table conceptually mixes apples, oranges, grapefruit and chop suey, if you want to do any analysis of the data using standard off-the-shelf software, in most cases you have to convert subsets of it into columnar form. The process of doing this, called pivoting, is important enough to be discussed separately.)
Metadata helps perform the sleight of hand that lets users interact with the system in terms of the logical schema rather than the physical: the software continually consults the metadata for various operations such as data presentation, interactive validation, bulk data extraction and ad hoc query. The metadata can actually be used to customize the behavior of the system.
EAV systems trade off simplicity in the physical and logical structure of the data for complexity in their metadata, which, among other things, plays the role that database constraints and referential integrity do in standard database designs. Such a tradeoff is generally worthwhile, because in the typical mixed schema of production systems, the data in conventional relational tables can also benefit from functionality such as automatic interface generation. The structure of the metadata is complex enough that it comprises its own subschema within the database: various foreign keys in the data tables refer to tables within this subschema. This subschema is standard-relational, with features such as constraints and referential integrity being used to the hilt.
The correctness of the metadata contents, in terms of the intended system behavior, is critical and the task of ensuring correctness means that, when creating an EAV system, considerable design efforts must go into building user interfaces for metadata editing that can be used by people on the team who know the problem domain (e.g., clinical medicine) but are not necessarily programmers. (Historically, one of the main reasons why the pre-relational TMR system failed to be adopted at sites other than its home institution was that all metadata was stored in a single file with a non-intuitive structure. Customizing system behavior by altering the contents of this file, without causing the system to break, was such a delicate task that the system's authors only trusted themselves to do it.)
Where an EAV system is implemented through RDF, the RDF Schema language may conveniently be used to express such metadata. This Schema information may then be used by the EAV database engine to dynamically re-organize its internal table structure for best efficiency.
Some final caveats regarding metadata:
Validation, presentation and grouping metadata make possible the creation of code frameworks that support automatic user interface generation for both data browsing and interactive editing. In a production system that is delivered over the Web, the task of validating EAV data is essentially moved from the back-end/database tier (which is powerless with respect to this task) to the middle/Web-server tier. While back-end validation is always ideal, because it is impossible to subvert by attempting direct data entry into a table, middle-tier validation through a generic framework is quite workable, though a significant amount of software design effort must go into building the framework first. The availability of open-source frameworks that can be studied and modified for individual needs can go a long way toward avoiding wheel reinvention.
(The first part of this section is a précis of the Dinu/Nadkarni reference article, to which the reader is directed for more details.)
EAV modeling, under the alternative terms "generic data modeling" or "open schema", has long been a standard tool for advanced data modelers. Like any advanced technique, it can be double-edged, and should be used judiciously.
Also, employing EAV does not preclude employing traditional relational database modeling approaches within the same database schema. In RDBMS-based EMRs such as Cerner, which use an EAV approach for their clinical-data subschema, the vast majority of tables in the schema are in fact traditionally modeled, with attributes represented as individual columns rather than as rows.
The modeling of the metadata subschema of an EAV system, in fact, is a very good fit for traditional modeling, because of the inter-relationships between the various components of the metadata. In the TrialDB system, for example, the metadata tables in the schema outnumber the data tables by about ten to one. Because the correctness and consistency of metadata is critical to the correct operation of an EAV system, the system designer wants to take full advantage of all of the features that RDBMSs provide, such as referential integrity and programmable constraints, rather than having to reinvent the RDBMS-engine wheel. Consequently, the numerous metadata tables that support EAV designs are typically in third normal form.
Commercial electronic health record systems (EHRs) use row modeling for classes of data such as diagnoses, surgical procedures performed, and laboratory test results, which are segregated into separate tables. In each table, the "entity" is a composite of the patient ID and the date/time the diagnosis was made (or the surgery or lab test performed); the attribute is a foreign key into a specially designated lookup table that contains a controlled vocabulary, e.g., ICD-10 for diagnoses or Current Procedural Terminology for surgical procedures; and there is a set of value attributes. (E.g., for laboratory-test results, one may record the value measured, whether it is in the normal, low or high range, the ID of the person responsible for performing the test, the date/time the test was performed, and so on.) As stated earlier, this is not a full-fledged EAV approach because the domain of attributes for a given table is restricted, just as the domain of product IDs in a supermarket's Sales table would be restricted to the domain of Products in a Products table.
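A hedged sketch of such a row-modeled laboratory subschema is shown below. The table and column names are hypothetical, and LOINC is used merely as an example of a controlled vocabulary for laboratory tests.

```sql
-- Illustrative row-modeled laboratory-results subschema: the "attribute"
-- is constrained to a controlled vocabulary of laboratory tests.
CREATE TABLE lab_test (
    lab_test_id INTEGER PRIMARY KEY,
    loinc_code  VARCHAR(20),                 -- or another controlled vocabulary
    test_name   VARCHAR(200) NOT NULL
);

CREATE TABLE lab_result (
    patient_id    INTEGER   NOT NULL,
    performed_at  TIMESTAMP NOT NULL,
    lab_test_id   INTEGER   NOT NULL REFERENCES lab_test(lab_test_id),
    value_numeric NUMERIC,
    abnormal_flag CHAR(1),                   -- 'L' = low, 'N' = normal, 'H' = high
    performed_by  INTEGER,                   -- ID of the person responsible for the test
    PRIMARY KEY (patient_id, performed_at, lab_test_id)
);
```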
However, to capture data on parameters that are not always defined in standard vocabularies, EHRs also provide a "pure" EAV mechanism, where specially designated power-users can define new attributes, their data type, maximum and minimum permissible values (or permissible set of values/codes), and then allow others to capture data based on these attributes. In the Epic (TM) EHR, this mechanism is termed "Flowsheets", and is commonly used to capture inpatient nursing observation data.
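The following sketch shows the general shape of the metadata such a mechanism might keep. It is illustrative only and does not represent any vendor's actual schema.

```sql
-- Hypothetical sketch of power-user-defined attributes with validation metadata.
CREATE TABLE user_defined_attribute (
    attribute_id   INTEGER PRIMARY KEY,
    attribute_name VARCHAR(200) NOT NULL,
    data_type      VARCHAR(20)  NOT NULL,     -- e.g. 'numeric', 'string', 'code'
    min_value      NUMERIC,                   -- lower bound, if numeric
    max_value      NUMERIC                    -- upper bound, if numeric
);

-- Permissible codes for attributes whose values come from an enumerated set.
CREATE TABLE attribute_permissible_value (
    attribute_id INTEGER NOT NULL REFERENCES user_defined_attribute(attribute_id),
    code         VARCHAR(50) NOT NULL,
    PRIMARY KEY (attribute_id, code)
);
```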
The typical case for using the EAV model is for highly sparse, heterogeneous attributes, such as clinical parameters in the electronic medical record (EMR), as stated above. Even here, however, it is accurate to state that the EAV modeling principle is applied to a sub-schema of the database rather than to all of its contents. (Patient demographics, for example, are most naturally modeled in a traditional, one-column-per-attribute relational structure.)
Consequently, the arguments about EAV vs. "relational" design reflect incomplete understanding of the problem: an EAV design should be employed only for that sub-schema of a database where sparse attributes need to be modeled, and even there, it needs to be supported by third-normal-form metadata tables. There are relatively few database-design problems where sparse attributes are encountered, which is why the circumstances where EAV design is applicable are relatively rare. Even where they are encountered, a set of EAV tables is not the only way to address sparse data: an XML-based solution (discussed below) is applicable when the maximum number of attributes per entity is relatively modest and the total volume of sparse data is similarly modest. An example of this situation is the problem of capturing variable attributes for different product types.
Sparse attributes may also occur in E-commerce situations where an organization is purchasing or selling a vast and highly diverse set of commodities, with the details of individual categories of commodities being highly variable.
Another application of EAV is in modeling classes and attributes that, while not sparse, are dynamic, but where the number of data rows per class will be relatively modest – a couple of hundred rows at most, but typically a few dozen – and the system developer is also required to provide a Web-based end-user interface within a very short turnaround time. "Dynamic" means that new classes and attributes need to be continually defined and altered to represent an evolving data model. This scenario can occur in rapidly evolving scientific fields as well as in ontology development, especially during the prototyping and iterative refinement phases.
While the creation of new tables and columns to represent a new category of data is not especially labor-intensive, the programming of Web-based interfaces that support browsing or basic editing with type- and range-based validation is. In such a case, a more maintainable long-term solution is to create a framework where the class and attribute definitions are stored in metadata, and the software generates a basic user interface from this metadata dynamically.
The EAV/CR framework, mentioned earlier, was created to address this very situation. Note that an EAV data model is not essential here, but the system designer may consider it an acceptable alternative to creating, say, sixty or more tables containing a total of not more than two thousand rows. Here, because the number of rows per class is so few, efficiency considerations are less important; with the standard indexing by class ID/attribute ID, DBMS optimizers can easily cache the data for a small class in memory when running a query involving that class or attribute.
In the dynamic-attribute scenario, it is worth noting that the Resource Description Framework (RDF) is being employed as the underpinning of Semantic-Web-related ontology work. RDF, intended to be a general method of representing information, is a form of EAV: an RDF triple comprises a subject, a predicate and an object, corresponding respectively to an entity, an attribute and a value.
At the end of Jon Bentley's book "Writing Efficient Programs", the author warns that making code more efficient generally also makes it harder to understand and maintain, and so one should not rush in and tweak code unless one has first determined that there is a performance problem, and measures such as code profiling have pinpointed the exact location of the bottleneck; only then does one modify the specific code that needs to run faster. Similar considerations apply to EAV modeling: you apply it only to the sub-system where traditional relational modeling is known a priori to be unwieldy (as in the clinical data domain), or is discovered, during system evolution, to pose significant maintenance challenges. The database expert Tom Kyte (currently a vice-president of Core Technologies at Oracle Corporation),[5] for example, correctly points out drawbacks of employing EAV in traditional business scenarios, and makes the point that mere "flexibility" is not a sufficient criterion for employing EAV. (However, he makes the sweeping claim that EAV should be avoided in all circumstances, even though Oracle's Health Sciences division itself employs EAV to model clinical-data attributes in its commercial systems ClinTrial[6] and Oracle Clinical.[7])
The Achilles heel of EAV is the difficulty of working with large volumes of EAV data. It is often necessary to transiently or permanently inter-convert between columnar and row- or EAV-modeled representations of the same data; this can be both error-prone (if done manually) and CPU-intensive. Generic frameworks that utilize attribute and attribute-grouping metadata address the former but not the latter limitation; their use is more or less mandated in the case of mixed schemas that contain a mixture of conventional-relational and EAV data, where the error quotient can be very significant.
The conversion operation is called pivoting. Pivoting is required not only for EAV data but also for any form of row-modeled data. (For example, implementations of the Apriori algorithm for association analysis, widely used to process supermarket sales data to identify other products that purchasers of a given product are also likely to buy, pivot row-modeled data as a first step.) Many database engines have proprietary SQL extensions to facilitate pivoting, and packages such as Microsoft Excel also support it. The circumstances where pivoting is necessary are considered below.
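As an illustration, pivoting can be expressed in standard SQL using conditional aggregation. The query below assumes the hypothetical eav_finding table sketched earlier, with attribute IDs 1, 2 and 4 standing in for temperature, cough and heart rate.

```sql
-- Pivoting sketch: turning EAV rows back into one column per attribute.
-- Many engines also offer a proprietary PIVOT operator for the same purpose.
SELECT event_id,
       MAX(CASE WHEN attribute_id = 1 THEN value END) AS temperature_celsius,
       MAX(CASE WHEN attribute_id = 2 THEN value END) AS presence_of_cough,
       MAX(CASE WHEN attribute_id = 4 THEN value END) AS heart_rate_bpm
FROM   eav_finding
GROUP  BY event_id;
```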
However, the structure of the EAV data model is a perfect candidate for relational division (see relational algebra). With a good indexing strategy it is possible to get response times of less than a few hundred milliseconds on a billion-row EAV table. Microsoft SQL Server MVP Peter Larsson has demonstrated this on a laptop and made the solution generally available.[8]
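The published solution is not reproduced here, but a typical relational-division formulation over an EAV table (finding the entities that possess all of a required set of attributes) can be sketched as follows, again using the hypothetical eav_finding table and attribute IDs.

```sql
-- Relational-division-style query: entities that have *all* of a required
-- set of attributes (here, hypothetical attribute IDs 1, 2 and 4).
SELECT event_id
FROM   eav_finding
WHERE  attribute_id IN (1, 2, 4)
GROUP  BY event_id
HAVING COUNT(DISTINCT attribute_id) = 3;
```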
Obviously, no matter what approach one takes, querying EAV will not be as fast as querying standard column-modeled relational data for certain types of query, in much the same way that access to elements in sparse matrices is not as fast as access to elements in non-sparse matrices if the latter fit entirely into main memory. (Sparse matrices, represented using structures such as linked lists, require list traversal to access an element at a given X-Y position, while access to elements in matrices represented as 2-D arrays can be performed using fast CPU register operations.) If, however, the EAV approach was chosen correctly for the problem at hand, this is the price one pays; in this respect, EAV modeling is an example of a space (and schema maintenance) versus CPU-time tradeoff.
Originally postulated by Maier, Ullman and Vardi,[9] the "Universal Data Model" (UDM) seeks to simplify querying of a complex relational schema by naive users, by creating the illusion that everything is stored in a single giant "universal table". It does this by utilizing inter-table relationships, so that the user does not need to be concerned about what table contains what attribute. C.J. Date, however,[10] pointed out that in circumstances where a table is multiply related to another (as in genealogy databases, where an individual's father and mother are also individuals, or in some business databases where all addresses are stored centrally, and an organization can have different office addresses and shipping addresses), there is insufficient metadata within the database schema to specify unambiguous joins. Where the UDM has been commercialized, as in SAP BusinessObjects, this limitation is worked around through the creation of "Universes", which are relational views with predefined joins between sets of tables: the "Universe" developer disambiguates ambiguous joins by including the multiply-related table in a view multiple times using different aliases.
Apart from the way in which data is explicitly modeled (UDM simply uses relational views to intercede between the user and the database schema), EAV differs from Universal Data Models in that it also applies to transactional systems, not only to query-oriented (read-only) systems as with the UDM. Also, when used as the basis for clinical-data query systems, EAV implementations do not necessarily shield the user from having to specify the class of an object of interest. In the EAV-based i2b2 clinical data mart, for example, when the user searches for a term, they have the option of specifying the category of data they are interested in. For example, the phrase "lithium" can refer either to the medication (which is used to treat bipolar disorder), or a laboratory assay for the lithium level in the patient's blood. (The blood level of lithium must be monitored carefully: too much of the drug causes severe side effects, while too little is ineffective.)
An Open Schema implementation can use an XML column in a table to capture the variable/sparse information.[11] Similar ideas can be applied to databases that support JSON-valued columns: sparse, hierarchical data can be represented as JSON. If the database has JSON support, as in PostgreSQL and (partially) SQL Server 2016 and later, then attributes can be queried, indexed and joined. This can offer performance improvements of over 1000× over naive EAV implementations,[12] but does not necessarily make the overall database application more robust.
Note that there are two ways in which XML or JSON data can be stored: one way is to store it as a plain string, opaque to the database server; the other way is to use a database server that can "see into" the structure. There are obviously some severe drawbacks to storing opaque strings: these cannot be queried directly, one cannot form an index based on their contents, and it is impossible to perform joins based on the content.
Building an application that has to manage data gets extremely complicated when using EAV models, because of the extent of infrastructure that has to be developed in terms of metadata tables and application-framework code. Using XML solves the problem of server-based data validation (which must be done by middle-tier and browser-based code in EAV-based frameworks), but has the following drawbacks:
All of the above drawbacks are remediable by creating a layer of metadata and application code, but in creating this, the original "advantage" of not having to create a framework has vanished. The fact is that modeling sparse data attributes robustly is a hard database-application-design problem no matter which storage approach is used. Sarka's work, however, proves the viability of using an XML field instead of type-specific relational EAV tables for the data-storage layer, and in situations where the number of attributes per entity is modest (e.g., variable product attributes for different product types) the XML-based solution is more compact than an EAV-table-based one. (XML itself may be regarded as a means of attribute–value data representation, though it is based on structured text rather than on relational tables.)
There exist several other approaches for representing tree-structured data, be it XML, JSON or other formats, in a relational database, such as the nested set model. On the other hand, database vendors have begun to include JSON and XML support in their data structures and query features, as in IBM Db2, where XML data is stored as XML separate from the tables, using XPath queries as part of SQL statements, or in PostgreSQL, with a JSON data type[13] that can be indexed and queried. These developments complement, improve upon, or substitute for the EAV model approach.
The uses of JSON and XML are not necessarily the same as the use of an EAV model, though they can overlap. XML is preferable to EAV for arbitrarily hierarchical data that is relatively modest in volume for a single entity: it is not intended to scale up to the multi-gigabyte level with respect to data-manipulation performance. XML is not concerned per se with the sparse-attribute problem, and when the data model underlying the information to be represented can be decomposed straightforwardly into a relational structure, XML is better suited as a means of data interchange than as a primary storage mechanism. EAV, as stated earlier, is specifically (and only) applicable to the sparse-attribute scenario. When such a scenario holds, the use of datatype-specific attribute–value tables that can be indexed by entity, by attribute, and by value and manipulated through simple SQL statements is vastly more scalable than the use of an XML tree structure. The Google App Engine, discussed below, uses strongly typed value tables for a good reason.
An alternative approach to managing the various problems encountered with EAV-structured data is to employ a graph database. These represent entities as the nodes of a graph or hypergraph, and attributes as links or edges of that graph. The issue of table joins is addressed by providing graph-specific query languages, such as Apache TinkerPop,[14] or the OpenCog atomspace pattern matcher.[15]
Another alternative is to use a SPARQL store.
PostgreSQL version 9.4 includes support for JSON binary columns (JSONB), which can be queried, indexed and joined. This allows performance improvements by factors of a thousand or more over traditional EAV table designs.
A DB schema based on JSONB generally has fewer tables: one may nest attribute–value pairs in JSONB-typed fields of the Entity table. That makes the DB schema easier to comprehend and SQL queries more concise.[16] The programming code to manipulate the database objects on the abstraction layer turns out to be much shorter.[17]
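A sketch of manipulating such nested attribute–value pairs follows, assuming the hypothetical entity table with a JSONB attrs column from the earlier sketch. Note that the || concatenation operator on jsonb requires PostgreSQL 9.5 or later.

```sql
-- Add or overwrite a single attribute nested in the entity row itself
-- (the || operator on jsonb needs PostgreSQL 9.5+).
UPDATE entity
SET    attrs = attrs || '{"heart_rate_bpm": 98}'::jsonb
WHERE  entity_id = 42;

-- Read one attribute back as text.
SELECT attrs ->> 'heart_rate_bpm' AS heart_rate
FROM   entity
WHERE  entity_id = 42;
```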
Microsoft SQL Server 2008 offers a (proprietary) alternative to EAV.[18] Columns with an atomic data type (e.g., numeric, varchar or datetime columns) can be designated as sparse simply by including the word SPARSE in the column definition of the CREATE TABLE statement. Sparse columns optimize the storage of NULL values (which then take up no space at all) and are useful when the majority of records in a table will have NULL values for a given column. Indexes on sparse columns are also optimized: only those rows with values are indexed. In addition, the contents of all sparse columns in a particular row of a table can be collectively aggregated into a single XML column (a column set), whose contents are of the form <column-name>column contents</column-name>, repeated for each sparse column that holds a value.
In fact, if a column set is defined for a table as part of a CREATE TABLE statement, all sparse columns subsequently defined are typically added to it. This has the interesting consequence that the SQL statement SELECT * FROM <tablename>
will not return the individual sparse columns, but will concatenate all of them into a single XML column whose name is that of the column set (which therefore acts as a virtual, computed column). Sparse columns are convenient for business applications such as product information, where the applicable attributes can be highly variable depending on the product type, but where the total number of variable attributes per product type is relatively modest.
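A minimal sketch of the feature follows; the Product table and its columns are hypothetical, but the SPARSE and COLUMN_SET FOR ALL_SPARSE_COLUMNS syntax is as documented for SQL Server.

```sql
-- SQL Server sketch: sparse columns plus a column set for the variable attributes.
CREATE TABLE Product (
    ProductID         INT PRIMARY KEY,
    ProductName       VARCHAR(100) NOT NULL,
    Wattage           INT          SPARSE NULL,   -- applies to light bulbs
    ResolutionMP      DECIMAL(6,2) SPARSE NULL,   -- applies to cameras
    SpecialProperties XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
);

INSERT INTO Product (ProductID, ProductName, Wattage) VALUES (1, '60W bulb', 60);

-- Returns ProductID, ProductName and the SpecialProperties XML column,
-- not the individual sparse columns.
SELECT * FROM Product;
```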
However, this approach to modeling sparse attributes has several limitations: rival DBMSs have, notably, chosen not to borrow this idea for their own engines. Limitations include:
Many cloud computing vendors offer data stores based on the EAV model, where an arbitrary number of attributes can be associated with a given entity. Roger Jennings provides an in-depth comparison of these. In Amazon's offering, SimpleDB, the data type is limited to strings, and data that is intrinsically non-string must be coerced to string (e.g., numbers must be padded with leading zeros) if you wish to perform operations such as sorting. Microsoft's offering, Windows Azure Table Storage, offers a limited set of data types: byte[], bool, DateTime, double, Guid, int, long and string (http://msdn.microsoft.com/en-us/library/dd179338.aspx). The Google App Engine (https://code.google.com/appengine/docs/whatisgoogleappengine.html) offers the greatest variety of data types: in addition to dividing numeric data into int, long, or float, it also defines custom data types such as phone number, e-mail address, geocode and hyperlink. Google, but not Amazon or Microsoft, lets you define a metadata model that prevents invalid attributes from being associated with a particular class of entity.
Google lets you operate on the data using a subset of SQL; Microsoft offers a URL-based querying syntax that is abstracted via a LINQ provider; Amazon offers a more limited syntax. Of concern, built-in support for combining different entities through joins was, as of April 2010, non-existent in all three engines. Such operations have to be performed by application code. This may not be a concern if the application servers are co-located with the data servers at the vendor's data center, but a lot of network traffic would be generated if the two were geographically separated.
An EAV approach is justified only when the attributes that are being modeled are numerous and sparse: if the data being captured does not meet this requirement, the cloud vendors' default EAV approach is often a mismatch for applications that require a true back-end database (as opposed to merely a means of persistent data storage). Retrofitting the vast majority of existing database applications, which use a traditional data-modeling approach, to an EAV-type cloud architecture would require major surgery. Microsoft discovered, for example, that its database-application-developer base was largely reluctant to invest such effort. In 2010, therefore, Microsoft launched a premium offering, SQL Server Azure, a cloud-accessible, fully fledged relational engine that allows porting of existing database applications with only modest changes. As of the early 2020s, the service allows standard-tier physical database sizes of up to 8 TB,[19] with "hyperscale" and "business-critical" offerings also available.