Draco 1 vs. Draco 2#
Draco 2 builds upon the core idea of Draco 1: using constraints based on Answer Set Programming (ASP) to represent design knowledge about effective visualization designs. However, Draco 2 is a complete rewrite of Draco 1, with various improvements and new features. In short, Draco 2 is written entirely in Python and no longer requires both Python and Node.js. We still use ASP for the knowledge base and Clingo as the solver. The chart specification format is generalized and extended to support multiple views and view composition. We also improved the test coverage, documentation, and development tooling.
In this notebook, we compare and contrast the capabilities of Draco 1 and Draco 2 through hands-on examples.
⚠️ This notebook requires a Node.js runtime so that the `draco1` bindings work as expected. Draco 2 itself does not require any non-Python dependencies.
# Display utilities
import json
from typing import Callable

from IPython.display import Markdown, display


def md(markdown: str):
    display(Markdown(markdown))


def md_json(dct: dict):
    md(f"```json\n{json.dumps(dct, indent=2)}\n```")


def run_guarded(func: Callable):
    try:
        func()
    except Exception as e:
        md(f'**Error:** <i style="color: red;">{e}</i>')
We’ll be installing a forked version of Draco 1, specifically named `draco1`. This is to prevent any conflicts with the currently installed `draco` package, which refers to Draco 2 (i.e., this repository). It’s important to note that the `draco1` fork doesn’t modify the original functionality of Draco 1; it simply renames the package. This way, we can clearly distinguish between the two versions of Draco for our comparison and interact with both of them within the same notebook.
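If you are running this notebook yourself, both packages need to be installed first; a minimal sketch, assuming the fork is published on PyPI under the name `draco1`:

# Hypothetical install cell; assumes the Draco 1 fork is distributed as `draco1`
%pip install draco draco1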
import draco1 as drc1
import draco as drc2
md(f"Comparing _Draco 1: v{drc1.__version__}_ with _Draco 2: v{drc2.__version__}_")
Comparing Draco 1: v0.0.9 with Draco 2: v2.0.1
API Implementation Comparison#
We set off by comparing and contrasting the APIs of Draco 1 and Draco 2 as well as investigating the features and technical characteristics of the two versions.
- ✅: Feature is implemented
- ✓: Feature is implemented, but it is limited in some way compared to the other version
- 🚫: Feature is not implemented
- *spec*: Refers to a chart specification
- *ASP*: Refers to Answer Set Programming
| | Draco 1 | Draco 2 |
|---|---|---|
| Implementation language | Node.js | Python |
| Execute an ASP problem | ✅ | ✅ |
| Access results of a run | ✅ | ✅ |
| Check whether an ASP problem is satisfiable | ✅ | ✅ |
| List ASP problem violations | ✅ | ✅ |
| Show how often a spec violates a preference | ✅ | ✅ |
| Generate ASP definitions from data | ✅ | ✅ |
| Conversion between spec formats | ✅ ASP ↔️ Vega-Lite, CompassQL | ✅ ASP ↔️ Nested dictionary, Vega-Lite |
| Render recommendations | ✅ | ✅ |
| Constraint weight learning | ✓ separate project | ✅ |
| Web browser support | ✓ | ✅ |
| Compatibility with Altair | 🚫 | ✅ |
| Standalone function for completing a partial spec | 🚫 | ✅ |
| RESTful interface | 🚫 | ✅ |
| Recommendation & constraint weight debugging | 🚫 | ✅ |
API Differences in Practice#
While the implementation language noted in the first row of the comparison table above may seem like a minor detail at first sight, it’s important to note that Draco 2 is written entirely in Python, whereas Draco 1 is written in TypeScript and its Python API is only a wrapper around a Node.js subprocess. This means that Draco 2 is much easier to install and use, as it doesn’t require any non-Python dependencies. Furthermore, it integrates far more seamlessly with the Python ecosystem, as it can be used alongside other Python libraries without worrying about serialization issues.
We demonstrate this particular advantage of Draco 2 over Draco 1 in the cells below through a common use case: generating the schema of a dataset in preparation for the generation of recommendations.
We set off by loading the Seattle Weather dataset from the Vega Datasets package. We then use Draco 1 (`drc1`) and Draco 2 (`drc2`) to generate the schema of the dataset. We represent the data schema as a list of Answer Set Programming (ASP) rules in both versions of Draco.
import pandas as pd
from vega_datasets import data as vega_data
df: pd.DataFrame = vega_data.seattle_weather()
df.head()
| | date | precipitation | temp_max | temp_min | wind | weather |
|---|---|---|---|---|---|---|
| 0 | 2012-01-01 | 0.0 | 12.8 | 5.0 | 4.7 | drizzle |
| 1 | 2012-01-02 | 10.9 | 10.6 | 2.8 | 4.5 | rain |
| 2 | 2012-01-03 | 0.8 | 11.7 | 7.2 | 2.3 | rain |
| 3 | 2012-01-04 | 20.3 | 12.2 | 5.6 | 4.7 | rain |
| 4 | 2012-01-05 | 1.3 | 8.9 | 2.8 | 6.1 | rain |
Draco 1#
As the cells below show, while Draco 1 exposes the `data_to_asp` function to generate the schema of a dataset, it is not directly compatible with a Pandas `DataFrame`. What’s more, even after converting the dataframe to a list of dictionaries, under the assumption that it will be JSON serializable without issues, the function still fails to generate the schema because the `date` column of the dataset is stored as a `Timestamp` object, which is not JSON serializable.

We succeed with the schema generation only after converting the `date` column to a string of the format `YYYY-MM-DD`.
# Attempt to generate the schema of the dataframe directly
run_guarded(lambda: drc1.data_to_asp(df))
Error: Object of type DataFrame is not JSON serializable
# Attempt to generate the schema of the dataframe after converting it to a list of dictionaries
data_records = df.to_dict("records")
run_guarded(lambda: drc1.data_to_asp(data_records))
Error: Object of type Timestamp is not JSON serializable
# Attempt to generate the schema of the dataframe after converting it to a list of dictionaries
# and converting the `date` column to a string of the format `YYYY-MM-DD`
df_serializable = df.copy()
df_serializable["date"] = df_serializable["date"].apply(
lambda x: x.strftime("%Y-%m-%d")
)
data_records = df_serializable.to_dict("records")
drc1.data_to_asp(data_records)
['num_rows(1461).',
'',
'fieldtype("date",string).',
'cardinality("date", 1461).',
'fieldtype("precipitation",number).',
'cardinality("precipitation", 111).',
'fieldtype("temp_max",number).',
'cardinality("temp_max", 67).',
'fieldtype("temp_min",number).',
'cardinality("temp_min", 55).',
'fieldtype("wind",number).',
'cardinality("wind", 79).',
'fieldtype("weather",string).',
'cardinality("weather", 5).',
'']
Draco 2#
Thanks to the fact that Draco 2 is written entirely in Python, it can directly accept a Pandas `DataFrame` as input for schema generation, without any of the issues we encountered with Draco 1.
data_schema = drc2.schema_from_dataframe(df)
drc2.dict_to_facts(data_schema)
['attribute(number_rows,root,1461).',
'entity(field,root,0).',
'attribute((field,name),0,date).',
'attribute((field,type),0,datetime).',
'attribute((field,unique),0,1461).',
'attribute((field,entropy),0,7287).',
'entity(field,root,1).',
'attribute((field,name),1,precipitation).',
'attribute((field,type),1,number).',
'attribute((field,unique),1,111).',
'attribute((field,entropy),1,2422).',
'attribute((field,min),1,0).',
'attribute((field,max),1,55).',
'attribute((field,std),1,6).',
'entity(field,root,2).',
'attribute((field,name),2,temp_max).',
'attribute((field,type),2,number).',
'attribute((field,unique),2,67).',
'attribute((field,entropy),2,3934).',
'attribute((field,min),2,-1).',
'attribute((field,max),2,35).',
'attribute((field,std),2,7).',
'entity(field,root,3).',
'attribute((field,name),3,temp_min).',
'attribute((field,type),3,number).',
'attribute((field,unique),3,55).',
'attribute((field,entropy),3,3596).',
'attribute((field,min),3,-7).',
'attribute((field,max),3,18).',
'attribute((field,std),3,5).',
'entity(field,root,4).',
'attribute((field,name),4,wind).',
'attribute((field,type),4,number).',
'attribute((field,unique),4,79).',
'attribute((field,entropy),4,3950).',
'attribute((field,min),4,0).',
'attribute((field,max),4,9).',
'attribute((field,std),4,1).',
'entity(field,root,5).',
'attribute((field,name),5,weather).',
'attribute((field,type),5,string).',
'attribute((field,unique),5,5).',
'attribute((field,entropy),5,1201).',
'attribute((field,freq),5,714).']
Visualization Specification Language Differences#
To express knowledge about visualizations, we first need a language to describe them. Both versions of Draco use sets of logical facts to describe visualizations and their context. While Draco 1 and Draco 2 share the fundamental approach, the underlying language designs are quite different.
The language used to express visualizations in Draco 1 is based entirely on Vega-Lite, a concise, yet expressive high-level visualization language. Although this choice makes the conversion between the ASP facts and a Vega-Lite spec easy in Draco 1, the design space is bounded by the capabilities of Vega-Lite. Furthermore, the language cannot be extended with user-defined details, making it more rigid overall.
On the other hand, the visualization specification language of Draco 2 was designed with flexibility and extensibility in mind. It can be used to specify all the visualizations that Draco 1 can express, and more, by using a nested specification format based on entities and attributes.
We demonstrate the different approaches Draco 1 and Draco 2 take for specifying visualizations by showing how the same chart can be specified using their languages.
Language Differences in Practice#
**The chart to be encoded**

Still having the Seattle Weather dataset at hand, let’s suppose that we are interested in how the maximum temperatures across different weather conditions compare. For this very simple analytical task, we can create a bar chart that encodes the `weather` field on the `x` channel and the mean of the `temp_max` field on the `y` channel.
import altair as alt
alt.Chart(df).mark_bar().encode(
x=alt.X(field="weather", type="ordinal"),
y=alt.Y(
field="temp_max",
type="quantitative",
aggregate="mean",
scale=alt.Scale(zero=True),
),
)
Draco 1#
As Draco 1 is based on the Vega-Lite specification, the ASP function names and values it uses to declare facts are identical to the JSON attributes in a Vega-Lite specification.
drc1_asp = [
    # Use a bar mark
    "mark(bar).",
    # Declare the existence of our first encoding, identified by `e0`
    "encoding(e0).",
    # The encoding `e0` uses the `x` channel
    "channel(e0,x).",
    # The encoding `e0` encodes the `weather` field of the dataset
    'field(e0,"weather").',
    # The encoding `e0` has the `ordinal` type
    "type(e0,ordinal).",
    # Declare the existence of our second encoding, identified by `e1`
    "encoding(e1).",
    # The encoding `e1` uses the `y` channel
    "channel(e1,y).",
    # The encoding `e1` encodes the `temp_max` field of the dataset
    'field(e1,"temp_max").',
    # The encoding `e1` has the `quantitative` type
    "type(e1,quantitative).",
    # The encoding `e1` uses `mean` for aggregation
    "aggregate(e1,mean).",
    # On the scale of the encoding `e1`, the `zero` attribute is set to `true`
    "zero(e1).",
]
We can use the `drc1.asp2vl` function to convert this chart specification from Draco 1’s ASP format into a Vega-Lite specification, the dictionary-based format Draco 1 supports.
drc1_spec_converted = drc1.asp2vl(drc1_asp)
md_json(drc1_spec_converted)
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "data": {
    "url": "data/cars.json"
  },
  "mark": "bar",
  "encoding": {
    "x": {
      "type": "ordinal",
      "field": "weather"
    },
    "y": {
      "type": "quantitative",
      "aggregate": "mean",
      "field": "temp_max",
      "scale": {
        "zero": true
      }
    }
  }
}
Note

Why does the output above have `"url": "data/cars.json"` inside the `data` attribute?

This shortcoming of Draco 1 is also caused by the fact that it does not have first-class Python support: data serialization is needed every single time a function inside the `drc1` module is invoked, since it runs in a Node.js environment under the hood. The `data` attribute therefore carries this hard-coded placeholder, so that the actually rendered data does not need to make a round trip between the Python and the Node.js processes.
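If you want to render the converted specification with the actual data, the placeholder can simply be overwritten on the Python side. The sketch below is our own workaround, not part of the Draco 1 API, and reuses the JSON-serializable dataframe from earlier:

import altair as alt

# Swap the hard-coded "data/cars.json" placeholder for the actual records
vl_patched = {
    **drc1_spec_converted,
    "data": {"values": df_serializable.to_dict("records")},
}
alt.Chart.from_dict(vl_patched)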
Draco 2#
Just as in Draco 1, visualizations in Draco 2 can be expressed both as a set of logical facts and as a nested dictionary. However, in contrast to Draco 1, where a predefined set of functions (such as `mark`, `encoding`, `channel`, `field`, `type`, etc.) is used to declare logical facts over a visualization, Draco 2 uses a generic specification format. As the example below demonstrates, only two functions are used: `entity` and `attribute`. Entity facts declare the existence of an object, while attribute facts specify properties of entities. The default top-level object is called `root` and can be referenced without any prior declaration.
drc2_asp = [
    # Set the `number_rows` attribute of the `root` entity to the number of rows in our dataset
    f"attribute(number_rows,root,{df.shape[0]}).",
    # Declare the existence of a top-level `field` entity with ASP identifier "temp_max"
    "entity(field,root,temp_max).",
    # ... set the `name` attribute of the `field` entity identified by `temp_max` to "temp_max"
    "attribute((field,name),temp_max,temp_max).",
    # ... set the `type` attribute of the `field` entity identified by `temp_max` to "number"
    "attribute((field,type),temp_max,number).",
    # Declare the existence of a top-level `field` entity with ASP identifier "weather"
    "entity(field,root,weather).",
    # ... set the `name` attribute of the `field` entity identified by `weather` to "weather"
    "attribute((field,name),weather,weather).",
    # ... set the `type` attribute of the `field` entity identified by `weather` to "string"
    "attribute((field,type),weather,string).",
    # Declare the existence of a top-level `view` entity with ASP identifier "v0"
    "entity(view,root,v0).",
    # ... set the `coordinates` attribute of the `view` entity identified by `v0` to "cartesian"
    "attribute((view,coordinates),v0,cartesian).",
    # Declare the existence of a `mark` entity with ASP identifier "m0", nested into the `v0` view entity
    "entity(mark,v0,m0).",
    # ... set the `type` attribute of the `mark` entity identified by `m0` to "bar"
    "attribute((mark,type),m0,bar).",
    # Declare the existence of an `encoding` entity with ASP identifier "e0", nested into the `m0` mark entity
    "entity(encoding,m0,e0).",
    # ... set the `channel` attribute of the `encoding` entity identified by `e0` to "x"
    "attribute((encoding,channel),e0,x).",
    # ... set the `field` attribute of the `encoding` entity identified by `e0` to "weather"
    "attribute((encoding,field),e0,weather).",
    # Declare the existence of an `encoding` entity with ASP identifier "e1", nested into the `m0` mark entity
    "entity(encoding,m0,e1).",
    # ... set the `channel` attribute of the `encoding` entity identified by `e1` to "y"
    "attribute((encoding,channel),e1,y).",
    # ... set the `field` attribute of the `encoding` entity identified by `e1` to "temp_max"
    "attribute((encoding,field),e1,temp_max).",
    # ... set the `aggregate` attribute of the `encoding` entity identified by `e1` to "mean"
    "attribute((encoding,aggregate),e1,mean).",
    # Declare the existence of a `scale` entity with ASP identifier "s0", nested into the `v0` view entity
    "entity(scale,v0,s0).",
    # ... set the `channel` attribute of the `scale` entity identified by `s0` to "x"
    "attribute((scale,channel),s0,x).",
    # ... set the `type` attribute of the `scale` entity identified by `s0` to "ordinal"
    "attribute((scale,type),s0,ordinal).",
    # Declare the existence of a `scale` entity with ASP identifier "s1", nested into the `v0` view entity
    "entity(scale,v0,s1).",
    # ... set the `channel` attribute of the `scale` entity identified by `s1` to "y"
    "attribute((scale,channel),s1,y).",
    # ... set the `type` attribute of the `scale` entity identified by `s1` to "linear"
    "attribute((scale,type),s1,linear).",
    # ... set the `zero` attribute of the `scale` entity identified by `s1` to "true"
    "attribute((scale,zero),s1,true).",
]
We can use the `drc2_facts_to_dict` helper defined below to convert this chart specification from Draco 2’s ASP format into a nested, dictionary-based format. When designing the API of Draco 2, we explicitly decided not to ship a `facts_to_dict` utility method, as it would hide essential details of the conversion process from the user, making Draco 2’s core idea of using a constraint solver less transparent. Instead, we provide `answer_set_to_dict` as a utility which takes a Clingo answer set and converts it to the desired nested dictionary format.
def drc2_facts_to_dict(facts: list[str]) -> dict:
    run_result = next(drc2.run_clingo(facts)).answer_set
    return dict(drc2.answer_set_to_dict(run_result))
drc2_spec_converted = drc2_facts_to_dict(drc2_asp)
md_json(drc2_spec_converted)
{
  "number_rows": 1461,
  "field": [
    {
      "name": "temp_max",
      "type": "number"
    },
    {
      "name": "weather",
      "type": "string"
    }
  ],
  "view": [
    {
      "coordinates": "cartesian",
      "mark": [
        {
          "type": "bar",
          "encoding": [
            {
              "channel": "x",
              "field": "weather"
            },
            {
              "channel": "y",
              "field": "temp_max",
              "aggregate": "mean"
            }
          ]
        }
      ],
      "scale": [
        {
          "channel": "x",
          "type": "ordinal"
        },
        {
          "channel": "y",
          "type": "linear",
          "zero": "true"
        }
      ]
    }
  ]
}
Note

The produced specification describes our desired chart perfectly; however, it is purposely not directly compatible with Vega-Lite. Using Draco 2 specs, we can express more details about a visualization than Vega-Lite can, such as the `coordinates` of a view, as the example above shows. To make Draco 2 specifications easy to display, we implemented our own rendering engine with Vega-Lite support.
Rendering Draco 2 Specifications using Vega-Lite#
from draco.renderer import AltairRenderer
AltairRenderer().render(drc2_spec_converted, df)
Extending Draco 2’s Specification Language#
If we would like to express more knowledge about visualizations, we can extend the provided language to describe more details about visualizations and a richer context around them. Depending on their needs, users can either augment an existing entity with additional attributes, or declare an entirely new entity and attach any number of custom attributes to it.
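Augmenting an existing entity is the lighter-weight of the two options: we only need to declare an extra attribute fact on an entity that is already part of the specification. Below is a minimal sketch; the `opacity` attribute is a made-up name used purely for illustration, and the default knowledge base and renderer know nothing about it.

# Sketch: augment the existing `m0` mark entity with a hypothetical attribute.
# `opacity` is purely illustrative and not part of Draco 2's default programs.
drc2_asp_augmented = drc2_asp + [
    "attribute((mark,opacity),m0,50).",
]
md_json(drc2_facts_to_dict(drc2_asp_augmented))

Declaring an entirely new entity is just as straightforward, as the rest of this section demonstrates.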
Let’s assume that we would like to set a constraint on the maximum allowed size of a visualization. In a dictionary-based format one would intuitively try something like:
{
  "size": {
    "maximumWidth": 300,
    "maximumHeight": 400
  },
  ...
}
We can do the same thing just as intuitively using Draco 2: we just need to declare a top-level `size` entity with the `maximumWidth` and `maximumHeight` attributes.
# reusing the above-defined specification
drc2_asp_extended = drc2_asp + [
    # Declare the existence of a top-level `size` entity with ASP identifier "my_size"
    "entity(size,root,my_size).",
    # ... set the `maximumWidth` attribute of the `size` entity identified by `my_size` to "300"
    "attribute((size,maximumWidth),my_size,300).",
    # ... set the `maximumHeight` attribute of the `size` entity identified by `my_size` to "400"
    "attribute((size,maximumHeight),my_size,400).",
]
drc2_spec_extended_converted = drc2_facts_to_dict(drc2_asp_extended)
md_json(drc2_spec_extended_converted)
{
  "number_rows": 1461,
  "field": [
    {
      "name": "temp_max",
      "type": "number"
    },
    {
      "name": "weather",
      "type": "string"
    }
  ],
  "view": [
    {
      "coordinates": "cartesian",
      "mark": [
        {
          "type": "bar",
          "encoding": [
            {
              "channel": "x",
              "field": "weather"
            },
            {
              "channel": "y",
              "field": "temp_max",
              "aggregate": "mean"
            }
          ]
        }
      ],
      "scale": [
        {
          "channel": "x",
          "type": "ordinal"
        },
        {
          "channel": "y",
          "type": "linear",
          "zero": "true"
        }
      ]
    }
  ],
  "size": [
    {
      "maximumWidth": 300,
      "maximumHeight": 400
    }
  ]
}
Note
Whenever extending the specification language with new elements, the underlying ASP constraints also need to be updated so that the new entities and attributes are taken into consideration when generating visualization recommendations. Renderers also need to be extended to process the customizations as desired.
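For illustration, a hard constraint over the new `size` entity could look like the sketch below. The `violation/1` head follows the convention used by Draco 2’s hard constraints, but this particular rule is hypothetical and not part of the shipped knowledge base.

# Hypothetical hard constraint (sketch): derive a violation whenever a spec
# declares a `maximumWidth` larger than 600 pixels.
size_limit_constraint = """
violation(max_width_too_large) :- attribute((size,maximumWidth),_,W), W > 600.
"""

# Run Clingo over the extended spec plus the custom rule and check whether
# any violation is derived (False here, since our spec declares a width of 300)
answer = next(drc2.run_clingo(drc2_asp_extended + [size_limit_constraint]))
any("violation" in str(symbol) for symbol in answer.answer_set)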
Behavior Comparison#
Another essential aspect to compare is the behavior of the two systems. Below, we analyze how Draco 1 and Draco 2 rank pairs of visualizations from the dataset compiled by Kim and Heer [KH18].
The actual ranking process involves evaluating a supplied visualization specification against the hard and soft constraints of Draco 1 and Draco 2 using their respective APIs, then observing the cost assigned to the given specification. We conduct this process for 100 pairs of visualizations in the dataset and analyze whether Draco 1 and Draco 2 rank the pairs in the same order.

We use the default constraint weights for both systems; these were hand-picked rather than learned using the provided weight-learning modules.
import random
from pathlib import Path
# Used for random sampling of pairs, arbitrary seed pinned for reproducibility
random.seed(7942)
NUM_PAIRS = 100
corpus: list[dict] = json.loads(Path("./data/kim2018_draco2.json").read_text())
# List of visualization pairs represented as tuples
specs_sampled = [
    (dct["positive"], dct["negative"]) for dct in random.sample(corpus, NUM_PAIRS)
]
As we loaded the visualizations in Draco 2’s specification format, we need a way to programmatically convert them to Draco 1’s format, so that we can evaluate the specifications using Draco 1’s API. To do so, we use Draco 2 to convert a specification to Vega-Lite, which is directly compatible with Draco 1.
def draco2_to_draco1(spec: list[str]) -> list[str]:
    """
    Converts a Draco 2 specification to Draco 1 format.

    :param spec: A list of ASP facts representing a Draco 2 specification
    :return: A list of ASP facts representing a Draco 1 specification
    """
    # We don't want to include non-Vega-Lite-specific attributes in the conversion
    to_exclude = ["unique"]
    spec_normalized = [
        fact for fact in spec if not any(excl in fact for excl in to_exclude)
    ]
    spec_dict_drc2 = drc2_facts_to_dict(spec_normalized)
    vl = AltairRenderer().render(spec_dict_drc2, pd.DataFrame()).to_dict()
    # Draco 1 expects a string for the mark attribute, not an object
    if isinstance(vl.get("mark", None), dict):
        vl["mark"] = vl.pop("mark")["type"]
    return drc1.vl2asp(vl)
We define utility functions to evaluate pairs of visualizations using Draco 1 and Draco 2. For each visualization specification we compute the cost assigned to it by Draco 1 and Draco 2, as well as the preference violations detected by the two systems.
# Disable logs coming from the spawned Draco 1 Node.js subprocess
import logging

logging.disable(logging.CRITICAL)
def process_results(df: pd.DataFrame) -> pd.DataFrame:
    """
    Post-processes the comparison results, supplied as a `DataFrame`.
    Pivots the table to merge row-pairs of different Draco versions into a single row.
    Also computes the difference in ranking between the two Draco versions.

    :param df: A `DataFrame` containing the raw comparison results
    :return: A `DataFrame` containing the post-processed comparison results
    """
    df_without_violations = df.drop(columns=["violations_a", "violations_b"])
    df_pivot = df_without_violations.pivot_table(
        index="pair_idx", columns="draco_version"
    )
    df_pivot.columns = [
        "_drc".join(map(str, col)).strip() for col in df_pivot.columns.values
    ]
    df_pivot["ranking_is_different"] = (
        df_pivot["better_ranked_drc1"] != df_pivot["better_ranked_drc2"]
    ).astype(int)
    return df_pivot
def analyze_evaluations(specs: list[tuple[list[str], list[str]]]) -> pd.DataFrame:
    """
    Evaluates the supplied pairs of visualizations using Draco 1 and Draco 2.
    Collects the results in a `DataFrame` and returns it.

    :param specs: A list of pairs of visualizations in Draco 2 format
    :return: A `DataFrame` containing the evaluation results
    """
    rows: list[dict] = []
    for pair_idx, (spec_a, spec_b) in enumerate(specs):
        rows.append(compute_row(pair_idx, spec_a, spec_b, draco_version=1))
        rows.append(compute_row(pair_idx, spec_a, spec_b, draco_version=2))
    return pd.DataFrame(rows)


def compute_row(
    pair_idx: int, spec_a: list[str], spec_b: list[str], draco_version: int
) -> dict:
    cost_a, violations_a = evaluate_spec(spec_a, draco_version)
    cost_b, violations_b = evaluate_spec(spec_b, draco_version)
    better_ranked = 0 if cost_a < cost_b else 1
    return {
        "pair_idx": pair_idx,
        "cost_a": cost_a,
        "violations_a": violations_a,
        "cost_b": cost_b,
        "violations_b": violations_b,
        "better_ranked": better_ranked,
        "draco_version": draco_version,
    }


def evaluate_spec(spec: list[str], draco_version: int) -> tuple[int, dict[str, int]]:
    assert draco_version in {1, 2}
    return evaluate_spec_drc1(spec) if draco_version == 1 else evaluate_spec_drc2(spec)


def evaluate_spec_drc2(spec: list[str]) -> tuple[int, dict[str, int]]:
    drc2_inst = drc2.Draco()
    model = next(drc2_inst.complete_spec(spec))
    return model.cost[0], dict(drc2_inst.count_preferences(str(model)))


def evaluate_spec_drc1(spec: list[str]) -> tuple[int, dict[str, int]]:
    asp = draco2_to_draco1(spec)
    result = drc1.run(asp, relax_hard=True, silence_warnings=True)
    return result.cost, dict(result.violations)
Analyzing How the Two Draco Versions Rank Pairs of Visualizations#
Having the dataset of visualization specifications loaded and the utilities defined, we can now evaluate the pairs of visualizations using Draco 1 and Draco 2, collect the results into a `DataFrame`, and then process it for further analysis.
df_raw = analyze_evaluations(specs=specs_sampled)
df = process_results(df_raw)
df.head()
| pair_idx | better_ranked_drc1 | better_ranked_drc2 | cost_a_drc1 | cost_a_drc2 | cost_b_drc1 | cost_b_drc2 | ranking_is_different |
|---|---|---|---|---|---|---|---|
| 0 | 1.0 | 1.0 | 29.0 | 34.0 | 20.0 | 27.0 | 0 |
| 1 | 0.0 | 0.0 | 20.0 | 27.0 | 29.0 | 36.0 | 0 |
| 2 | 1.0 | 0.0 | 30.0 | 33.0 | 30.0 | 36.0 | 1 |
| 3 | 0.0 | 0.0 | 20.0 | 27.0 | 29.0 | 36.0 | 0 |
| 4 | 0.0 | 0.0 | 20.0 | 25.0 | 29.0 | 36.0 | 0 |
alt.Chart(df).transform_calculate(
    behavior='datum.ranking_is_different == 0 ? "Same" : "Different"'
).mark_bar().encode(
    alt.X(
        "behavior:N",
        title="Ranking Behavior",
        scale=alt.Scale(domain=["Same", "Different"]),
    ),
    alt.Y("count()", title="Count", scale=alt.Scale(domain=[0, 100])),
    tooltip=[
        alt.Tooltip("behavior:N", title="Ranking Difference"),
        alt.Tooltip("count()", title="Count"),
    ],
)
We can see that Draco 1 and Draco 2 agree on the ranking of visualization pairs 86 out of 100 times, indicating a high similarity in the two systems’ behavior. This similarity can be explained by the fact that both systems use ASP to formalize knowledge about visualization design, and that the underlying hard and soft constraints as well as the associated weights are similar.
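The agreement count plotted above can also be read directly off the processed `DataFrame`; a minimal sketch:

# Tally how often the two Draco versions rank the pairs the same way
same = int((df["ranking_is_different"] == 0).sum())
different = int((df["ranking_is_different"] == 1).sum())
md(f"Same ranking: **{same}** pairs, different ranking: **{different}** pairs")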
Below, we look into the 14 pairs of visualizations for which the two systems disagree on the ranking.
Note about the used dataset

Throughout this analysis, we also keep in mind that the dataset of visualization specifications we used describes pairs of visualizations from a perceptual study, where participants were asked to compare the two visualizations and indicate which one they prefer. In the `DataFrame` we compiled for analysis, if the `better_ranked` column is `0`, the system prefers the visualization which was also deemed better by the participants in the perceptual study. If the `better_ranked` column is `1`, the system prefers the visualization which was deemed worse by the study participants.
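With this encoding in mind, each system’s overall agreement with the study participants can be computed directly from the processed `DataFrame`; a minimal sketch:

# Fraction of pairs in which each version prefers the human-preferred visualization
md_json(
    {
        "draco1_human_agreement": float((df["better_ranked_drc1"] == 0).mean()),
        "draco2_human_agreement": float((df["better_ranked_drc2"] == 0).mean()),
    }
)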
different_pairs = df[df["ranking_is_different"] == 1].index
df_different = df_raw[df_raw["pair_idx"].isin(different_pairs)]

alt.Chart(df_different).transform_calculate(
    better_ranked_label='datum.better_ranked == 0 ? "Positive" : "Negative"'
).mark_bar().encode(
    x=alt.X("better_ranked_label:N", title="Better Ranked Vis"),
    y=alt.Y("count()", title="Count of Visualization Pairs"),
    column=alt.Column(
        "draco_version:N",
        header=alt.Header(labelExpr='"Draco " + datum.value'),
        title="System Version",
    ),
    tooltip=[
        alt.Tooltip("better_ranked_label:N", title="Better Ranked Vis"),
        alt.Tooltip("count()", title="Count of Visualization Pairs"),
        alt.Tooltip("draco_version:N", title="System Version"),
    ],
).properties(
    width=100,
)
We can see that when Draco 1 and Draco 2 disagree on the ranking of the same visualization pairs, 14 out of 14 times Draco 2 prefers the visualization which was also deemed better by the participants in the perceptual study (labeled as `Positive`), while Draco 1 favors the `Negative` one. This indicates that Draco 2 is more in line with the results of the human perception study conducted by Kim and Heer.
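To understand an individual disagreement, we can inspect the preference violations recorded in the raw results; a minimal sketch for the first pair on which the two systems disagree:

# Compare the soft-constraint violations each version reports for spec A
first_pair_idx = df_different["pair_idx"].iloc[0]
for _, row in df_different[df_different["pair_idx"] == first_pair_idx].iterrows():
    md(f"**Draco {row['draco_version']}** violations:")
    md_json(dict(row["violations_a"]))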
All in all, we can conclude that Draco 1 and Draco 2 are very similar in their behavior when it comes to evaluating visualization specifications. The minor dissimilarities can be explained by the fact that Draco 2 uses a different set of hard constraints, soft constraints, and weights than Draco 1.
Conclusion#
The comparison between Draco 1 and Draco 2 reveals that both systems share a core idea of using constraints based on Answer Set Programming (ASP) to represent design knowledge about effective visualization designs. However, Draco 2, a complete rewrite of Draco 1, offers various improvements and new features.
Draco 2 is written entirely in Python, which provides a more seamless integration with the Python ecosystem and eliminates the need for non-Python dependencies. This makes Draco 2 easier to use compared to Draco 1, which is written in TypeScript and requires a Node.js runtime.
In terms of API implementation, both versions offer similar features, but Draco 2 has some additional capabilities, such as compatibility with Altair, a RESTful interface, and a recommendation & constraint weight debugging feature.
The visualization specification languages of the two versions are quite different. While Draco 1 uses a language based entirely on Vega-Lite, Draco 2 uses a more flexible and extensible language that can express more details about a visualization using a generic nested format, built around the idea of entities and attributes.
When comparing the behavior of the two systems, it was found that Draco 1 and Draco 2 agree on the ranking of visualization pairs 86 out of 100 times, indicating a high similarity in their behavior. However, in cases where they disagreed, Draco 2 consistently aligned more with the results of a human perception study.
In conclusion, while Draco 1 and Draco 2 share a common foundation, Draco 2 offers several improvements in terms of ease of use, flexibility, and extensibility while largely preserving the behavior of its predecessor.