TITLE: Analysis of TMRM Use Cases
SOURCE: Mr. Steve Pepper; Mr. Lars Marius Garshol
PROJECT: WD 13250-5: Information Technology - Document Description and Processing Languages, Topic Maps - Reference Model
PROJECT EDITOR: Mr. Patrick Durusau; Dr. Steven R. Newcomb
STATUS: Personal contribution
ACTION: For review and comment
DATE: 2004-04-07
DISTRIBUTION: SC34 and Liaisons
REFER TO:
  N0490 - 2004-03-15 - Topic Maps -- Reference Model Use Cases
  N0423 - 2003-05-07 - Recommendations of May 2003 Meeting of WG3
  N0460 - 2003-12-02 - Topic Maps - Reference Model, revision 3.10
REPLY TO:
  Dr. James David Mason (ISO/IEC JTC 1/SC 34 Secretariat - Standards Council of Canada)
  Crane Softwrights Ltd.
  Box 266, Kars, ON K0A-2E0 CANADA
  Telephone: +1 613 489-0999
  Facsimile: +1 613 489-0995
  Network: jtc1sc34@scc.ca
  http://www.jtc1sc34.org
In N423 (Recommendations of May 2003 Meeting of WG3) the editors of the Topic Maps Reference Model (TMRM) were instructed to prepare a user requirements document for the TMRM. The goal was to enable WG3 to arrive at a common understanding of the purpose of the TMRM, in order to better assess the relevance and value of the drafts of the TMRM that have so far been presented.
N490 (Topic Maps -- Reference Model Use Cases) is the editors' response. This document is a very valuable contribution to the discussion and we thank the authors for the effort that they have put into it. Now that the editors of the TMRM have described the kinds of business problems that the TMRM is intended to address, it becomes much easier to evaluate the need for such a reference model and determine the extent to which the current draft of the TMRM (N460) meets that need.
Our first reaction upon reading N490 was that it did not appear to contain any requirements that could not be met by the Topic Maps Data Model and a query language such as TMQL (Topic Maps Query Language, ISO 18048). If that is the case, then N490 in and of itself cannot be regarded as justification for the TMRM: The purpose of a Use Cases document is to identify User Requirements that justify the development of a new standard. If the User Requirements revealed by the Use Cases can already be met by existing or planned standards, there is no justification for a new standard.
This paper tests that initial hypothesis by evaluating each use case in N490 to determine whether or not it can be satisfied using existing or planned standards (i.e., the TMDM, TMQL, and/or TMCL). It does not attempt to assess the extent to which the current draft of the TMRM meets the requirements identified through the use cases in N490.
Our primary conclusions are as follows:

1. Every one of the use cases in N490 can be satisfied using existing or planned standards: the data model alone, the data model in combination with a query language, or those two in combination with a remote access protocol.
2. The use cases in N490 therefore do not provide sufficient justification for the development of a separate Reference Model.
3. Work on the TMDM, TMQL, and TMCL should not be delayed for the sake of a model that still lacks convincing justification.
It is important to note that our conclusion is that the current use cases document (N490) does not contain requirements that justify further work on the Reference Model. That does not mean that such requirements do not exist. We continue to believe that there may be a use – at least in the future – for a theoretical model that provides some kind of philosophical underpinning for concepts that are common to Topic Maps, RDF, and other knowledge organization paradigms. However, those requirements must be clearly identified before work on the model itself can continue because, unless we know its intended purpose (in terms of concrete use cases that cannot be satisfied by other means), it is not possible to evaluate the suitability of any particular model.
Most importantly, it would be wrong to allow a standard for which no clear User Requirements yet exist to delay the development of standards that are urgently required by existing users of Topic Maps.
(Note: This document contains a number of example topic maps that for reasons of brevity are expressed using LTM (Linear Topic Maps notation), although they could equally well have been expressed in XTM. It also includes a number of queries that are given here in the tolog query language because there is as yet no standard Topic Maps Query Language (TMQL). Those queries could equally well have been expressed in TMPath, AsTMa? or ToMa.)
This document focuses on the actual use cases given in N490. It does not consider section 1 (No Abstract Model of Topic Maps) to be a use case. The use cases will be referred to as follows: UC1 (insurance fraud detection), UC2 (USGS geographical names), UC3 (querying across Topic Maps and RDF), UC4 (the Widget Corporation), UC5 (airline passenger screening), UC6 (disclosure of merging rules), and UC7 (information sharing between organizations).
Many of these use cases can be reduced to the same basic requirement, namely the ability to determine – on the basis of a complex set of interrelated properties – that two topics represent (or may represent) the same subject. Since UC1 illustrates this requirement, most attention is focused on analyzing that particular use case.
This use case can be reduced to the following problem:
How to discover topics that ostensibly represent different subjects but could actually represent the same subject, based on the following criteria:
1. They are patients of the same physician.
2. They, and the physician, are located in the same place.
3. They have submitted the same type of claim.
4. They do not have the same social security number.
Annex 1 contains a topic map illustrating this scenario. It contains:

- eight claimants, each with a social security number
- three claim types
- four physicians
- three locations
- 'located-in', 'patient-of', and 'has-claim' associations connecting them
In order to detect possible frauds, all that is necessary is a query which returns pairs of claimants that fulfill criteria (1) to (4), as follows:
select $T1, $T2 from
  patient-of($T1 : patient, $P : physician),
  patient-of($T2 : patient, $P : physician),
  located-in($T1 : person, $L : place),
  located-in($T2 : person, $L : place),
  located-in($P : person, $L : place),
  has-claim($T1 : claimant, $C : claimtype),
  has-claim($T2 : claimant, $C : claimtype),
  $T1 /= $T2,
  ssnum($T1, $S1),
  ssnum($T2, $S2),
  $S1 /= $S2
order by $T1?
$T1 and $T2 are two topics that play the role of patient in 'patient-of' associations with the same physician ($P). The query ensures that all three of these are 'located-in' the same location ($L), that $T1 and $T2 have put in the same type of claim ($C), that they are not the same topic, and that they do not have the same social security number.
The query can be executed against the topic map in Annex 1 and will result in a two-column table where each row contains a pair of suspected frauds. (This can be tested in the latest version of the Omnigator. Note that the example includes two distinct topics with the same social security number. This was done deliberately in order to follow the use case description to the letter.)
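To make the logic of the query concrete, the same four criteria can be restated in plain Python over a few rows drawn from the Annex 1 topic map. The function and variable names below are ours, purely for illustration; this is not part of any Topic Maps API.

```python
# Hypothetical restatement of the UC1 query, using three rows from
# the Annex 1 topic map. All names here are illustrative.

# claimant -> (physician, location, claim type, social security number)
claimants = {
    "larsga": ("hansen", "oslo", "claimtype2", "1028374618273418324"),
    "pepper": ("hansen", "oslo", "claimtype2", "8263493485049548049"),
    "sylvia": ("hansen", "oslo", "claimtype3", "3830495629487561823"),
}
physician_location = {"hansen": "oslo"}

def suspected_frauds(rows):
    """Pairs meeting criteria (1)-(4): same physician, claimants and
    physician all in the same place, same claim type, different SSNs."""
    return [(t1, t2)
            for t1, (p1, l1, c1, s1) in rows.items()
            for t2, (p2, l2, c2, s2) in rows.items()
            if t1 != t2 and p1 == p2
            and l1 == l2 == physician_location[p1]
            and c1 == c2 and s1 != s2]

print(suspected_frauds(claimants))
# -> [('larsga', 'pepper'), ('pepper', 'larsga')]
```

As in the tolog result, each suspected pair appears twice (once in each order); 'sylvia' is excluded because her claim type differs.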
The conclusion is that a query language implemented on top of the TMDM is sufficient to solve the challenge of UC1 and that UC1 does not justify the need for a reference model. TMQL will provide a highly expressive and declarative language that is both human-readable and machine-processable. What better way could there be of documenting and disclosing merging rules?
This example also raises the question of what is actually meant by "merging" in N490. Are we really, in this use case, interested in merging the suspected claimants, or do we simply want to use the topic map as a basis for allowing fraud investigators to get answers to complex questions? Whatever the answer, we believe that a standard query language is the most appropriate solution.
In other words, UC1 actually underlines the need for TMQL and does nothing to justify the TMRM.
The second use case is very similar to the first. The challenge is to establish identity based on a set of interrelated properties, in this case the following:
1. The topics have the same name.
2. They have the same latitude.
3. They have the same longitude (in each case within some tolerance).
Latitude and longitude are most appropriately modelled as internal occurrences of a topic, thus:
<topic id="tokyo">
  <baseName><baseNameString>Tokyo</baseNameString></baseName>
  <occurrence>
    <instanceOf><topicRef xlink:href="#latitude"/></instanceOf>
    <resourceData>35 40 N</resourceData>
  </occurrence>
  <occurrence>
    <instanceOf><topicRef xlink:href="#longitude"/></instanceOf>
    <resourceData>139 45 E</resourceData>
  </occurrence>
</topic>
Annex 2 contains a topic map illustrating this scenario using GABx and GABy coordinates as an alternative to traditional latitude and longitude units. Its key characteristics are:

- four topics of type 'location', three of which carry the short name "Bar" in addition to their full names
- GABx and GABy coordinates represented as internal occurrences of each topic
The data set is as follows:
Topic | Short name | GABx | GABy |
---|---|---|---|
Bar 1 | Bar | -23440 | 6736321 |
Bar 2 | Bar | -23440 | 6736321 |
Bar 3 | Bar | -29873 | 6730858 |
Foo | null | -29873 | 6730858 |
Based on the information in this topic map the USGS can define a merging rule in the form of a query, as follows:
basename($T, $BN) :- topic-name($T, $N), value($N, $BN).

select $T1, $T2 from
  basename($T1, $BN), gabx($T1, $X), gaby($T1, $Y),
  basename($T2, $BN), gabx($T2, $X), gaby($T2, $Y),
  $T1 /= $T2?
The query first defines a predicate to use as an inference rule ("basename"). This is done in order to simplify the main query which selects two topics, $T1 and $T2, that have the same name ($BN), the same GABx coordinate ($X), and the same GABy coordinate ($Y). The result is a list of pairs of topics that the USGS can choose to merge, as follows:
T1 | T2 |
---|---|
Bar 1 | Bar 2 |
Bar 2 | Bar 1 |
Admittedly this solution does not satisfy every detail of UC2 because of a sting in the tail of the use case: The latitude and longitude should be the same "within some tolerance". (For example, if we assume that tolerance to be 10,000 units, then Bar 3 should be seen as representing the same subject as Bar 1 and Bar 2.) However this is just a shortcoming of the query language we have used: At present, tolog does not have the ability to use arbitrary functions. A design for such an extension does exist, however, and based on that design a complete solution to UC2 might look as follows:
import "math.tl" as func

close-to($T1, $T2) :-
  gabx($T1, $X1), value($X1, $x1),
  gaby($T1, $Y1), value($Y1, $y1),
  gabx($T2, $X2), value($X2, $x2),
  gaby($T2, $Y2), value($Y2, $y2),
  func:less-than(func:abs($x1 - $x2), 10000),
  func:less-than(func:abs($y1 - $y2), 10000).

same-name($T1, $T2) :-
  topic-name($T1, $N1), value($N1, $BN),
  topic-name($T2, $N2), value($N2, $BN).

select $T1, $T2 from
  same-name($T1, $T2),
  close-to($T1, $T2),
  $T1 /= $T2?
Here two inference rule predicates are defined ("close-to" and "same-name"), allowing the basic query to be expressed very succinctly (the last four lines). The "close-to" predicate uses two imported predicates ("less-than" and "abs") in such a way that "true" is returned if the pairs of GABx and GABy coordinates are within 10,000 units of each other.
Thus UC2 can also be satisfied using TMDM and TMQL – assuming TMQL somehow meets the requirement for extensibility through functions that this use case reveals. (In fact this is already one of the TMQL requirements.) Inasmuch as it can be satisfied using TMDM and TMQL, UC2 is therefore not a sufficient justification for a separate reference model.
This use case is somewhat different from the two preceding ones.
WG3's original intent with the Reference Model (as described in N278) was to "define a reference model for topic maps, which can be used to define the relationships to other knowledge representations", and it was generally envisioned that one of those knowledge representations – perhaps the most important in this context – would be RDF. UC3 is the only use case in N490 that actually addresses that original intent; all the others focus on a different issue, namely the documentation of merging rules, which was first articulated more recently.
UC3 is therefore of particular interest. However, it presupposes the existence of the very specification (the TMRM) that it is supposed to justify (see "4.1.2 Preconditions"). In our opinion it would be more useful if it expressed its user requirement(s) without reference to a proposed solution.
For that reason we cannot treat this use case in quite the same way as the other use cases. We have to attempt to abstract away from the "solution" that is built in to the problem description and find the real underlying use case. Fortunately this is not too difficult. We believe the use case can be summed up as follows:
There exist two sets of information described by Topic Maps and RDF respectively. The user wishes to query across the two information sets without having to know or care that they are described by two different knowledge representations, or which pieces of information are in which information set.
This is a very real use case and one that is particularly important to satisfy. Although it is hard to judge the actual uptake of RDF, there is no doubt that many users and potential users of Topic Maps are concerned that there should be interoperability between Topic Maps and RDF. The question is, how is this best achieved, and is a reference model required in order to do so? To answer this question – and thereby judge whether UC3 provides justification for the TMRM – it is necessary to take a step back:
Topic Maps and RDF have different underlying models – of that there is surely no doubt. It should also be reasonably clear that it is not possible to query directly across two data sets that conform to different models. In general, the data sets need to be made to conform to a single data model first, and then they can be queried at the same time. This would appear to be the approach taken by UC3 using the TMRM: Represent both the EU Topic Maps data and the UK RDF data in accordance with the TMRM and then query them on the basis of that model.
This would be a reasonable approach, unless it could be shown that one (or both) of the models could be represented in terms of the other, in which case it would be both unnecessary and wasteful of resources to devise a third model (the TMRM).
We believe this to be the case here. It has been amply demonstrated, both in theory and in practice, that RDF models can be represented as Topic Maps and vice versa, without loss of information. Lars Marius Garshol's paper Living with Topic Maps and RDF describes one possible approach based on the TMDM; the same basic approach has seen at least three independent implementations (by Ontopia, Empolis, and the University of Bologna).
Annex 3 contains a topic map, an RDF document, and a tolog query that demonstrate one of these implementations. (Note that the RDF model also contains mappings which can be thought of as corresponding to the "X-ref ontology" described in UC3.)
The data sets are as follows:
Author(s) | Document |
---|---|
*European Parliament document register (TM-based)* | |
Lars Marius Garshol | Metadata? Thesauri? Taxonomies? Topic Maps! |
Lars Marius Garshol | Living with Topic Maps and RDF |
Steve Pepper | The TAO of Topic Maps |
*UK Parliament document register (RDF-based)* | |
Lars Marius Garshol; Steve Pepper | The XML Papers |
The two data sets can be loaded into the Omnigator and then queried either separately or in a merged form using the following query:
using dc for i"http://purl.org/dc/elements/1.1/"
using onto for i"http://psi.ontopia.net/"
using person for i"http://psi.ontopia.net/person/"

dc:creator( $DOCUMENT : onto:document, person:larsga : onto:author )?
This demonstrates that RDF can be represented in terms of the TMDM. Others have shown that topic maps can be represented in terms of RDF. It follows that UC3 can be satisfied without the need for a "reference model". (Of course, such a reference model could be justified if there existed other models, in addition to the Topic Maps Data Model and RDF, that could not be represented in terms of the TMDM, but so far no evidence of such models exists. The relational model, for example, is easily represented in terms of the TMDM.) Thus UC3, while being a valid use case in itself, does not provide any justification for the TMRM.
UC4 focuses on two issues:

1. the merging of data that is stored in multiple character encodings; and
2. "determining subject identity based upon actual data and not based upon pointers to that information".
Neither of these issues presents a problem from the point of view of the Topic Maps Data Model.
The TMDM, like XML, is based on Unicode (which is a character set, not, as stated in N490, a character encoding). Unicode can represent the characters of a wide range of legacy encodings, including Shift_JIS, EUC-JP, and KS C 5601-1992. Queries across multiple data sets that use different encodings would be performed by transforming the data concerned into a single, uniform encoding, such as UTF-8. (Alternatively, depending on the application environment, the query itself might be transformed into the local encodings before being executed.)
Thus the Widget Corporation of UC4 might have two databases containing price and ordering information respectively about their products. One database might use the ISO 8859-1 encoding and the other UTF-8, as follows:
Product table in price database
recno | prodname | price |
---|---|---|
... | ... | ... |
14579 | Compière | 189.00 |
... | ... | ... |
Product table in order database
record_number | product_name | description |
---|---|---|
... | ... | ... |
758439 | Compière | Some description... |
... | ... | ... |
These databases might be reflected directly into two corresponding topic maps, as follows:
Products and prices in topic map form
@"iso-8859-1"
[product = "Product"]
[price = "Price"]
[pdb14579 : product = "Compière"]
{pdb14579, price, [[189.00]]}
Products and descriptions in topic map form
@"utf-8"
[product = "Product"]
[description = "Description"]
[odb758439 : product = "Compière"]
{odb758439, description, [[Some description]]}
Any topic map system that implements the TMDM could load these two topic maps and merge them (using name-based merging). The result would be that the two topics would be merged, despite the fact that their names are encoded differently. This can be demonstrated in the Omnigator.
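The underlying mechanism can be illustrated outside of any Topic Maps software: once byte sequences in different encodings are decoded into Unicode strings, equality comparison (and hence name-based merging) just works. The following sketch is ours, purely illustrative.

```python
# Illustration (ours, not from N490): the name "Compière" stored as
# ISO 8859-1 bytes in one database and as UTF-8 bytes in another.
# The raw byte sequences differ, but both decode to the same Unicode
# string, so name-based merging succeeds once both are decoded.

name_latin1 = "Compière".encode("iso-8859-1")   # from the price database
name_utf8   = "Compière".encode("utf-8")        # from the order database

assert name_latin1 != name_utf8     # the stored bytes are different

decoded_price = name_latin1.decode("iso-8859-1")
decoded_order = name_utf8.decode("utf-8")
assert decoded_price == decoded_order   # the Unicode strings are equal
```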
Thus the fact that data is stored in multiple character encodings does not prohibit it from being used for merging under the TMDM. This aspect of UC4 does not therefore justify the existence of a separate reference model.
The second issue raised by UC4 is "determining subject identity based upon actual data and not based upon pointers to that information". This use case is actually no different in principle from UC2, where the GABx and GABy values are "actual data" that would reside in a database. In UC2 a TMQL query expresses the merging rule as a complex set of interrelated properties; the same approach could be used in UC4.
Alternatively, if it were the case that Widget Corporation simply wished to use, say, a product number (e.g., "X1234-6a") as the basis for merging, the task could be performed without the use of TMQL, in one of two ways:
Both of these approaches allow subject identity to be determined based upon actual data using the TMDM, so once again UC4 cannot be said to justify the development of an additional model.
This use case introduces no new requirements to those already covered in UC2. Soundex matching can be performed using a query that includes a soundex function used against the names of both passengers and suspected terrorists. Using the proposed extension to tolog, such a query might look as follows:
import "string.tl" as str

similar-name($T1, $T2) :-
  topic-name($T1, $N1), value($N1, $V1),
  topic-name($T2, $N2), value($N2, $V2),
  str:soundex($V1, $SOUNDEX),
  str:soundex($V2, $SOUNDEX).

select $T1, $T2 from
  instance-of($T1, passenger),
  instance-of($T2, suspect),
  similar-name($T1, $T2),
  other-criteria($T1, $T2)?
(Note: The 'other-criteria' predicate is a place-holder for the "other criteria not disclosed" that is specified in the use case.)
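The str:soundex predicate used above is assumed to behave like the classic Soundex algorithm, under which names that sound alike map to the same four-character code. The following is one conventional formulation, written by us for illustration; it is not part of any Topic Maps standard.

```python
# A minimal sketch of the kind of function an assumed str:soundex
# predicate would provide: the classic four-character Soundex code.

def soundex(name: str) -> str:
    """Four-character Soundex code for an ASCII name."""
    groups = {"BFPV": "1", "CGJKQSXZ": "2", "DT": "3",
              "L": "4", "MN": "5", "R": "6"}
    digit = {c: d for letters, d in groups.items() for c in letters}
    name = name.upper()
    code = name[0]                      # the first letter is kept as-is
    prev = digit.get(name[0], "")
    for c in name[1:]:
        d = digit.get(c, "")
        if d and d != prev:             # skip repeated adjacent codes
            code += d
        if c not in "HW":               # H and W do not separate equal codes
            prev = d
    return (code + "000")[:4]           # pad or truncate to four characters

# Names that sound alike receive the same code:
print(soundex("Robert"), soundex("Rupert"))   # R163 R163
```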
Once again, the use case can be satisfied using the current TMDM and TMQL.
UC6 is identical to UC1, but with the additional requirement to be able to "disclose" the rules used for merging topics, in order to provide a basis for evaluating the capabilities of some piece of topic maps software.
The term "disclosure" is not defined. We assume that it means the action of revealing or communicating merging rules in some well-documented fashion that can be (1) interpreted by humans and (2) used to test topic maps software. Thus the ability to document merging rules in a formal, standardized manner seems to be the real underlying requirement of this use case.
It was demonstrated above that the "merging" required by UC1 (and thus also UC6) could be achieved using the TMDM and TMQL. If that is indeed the case, then the TMQL query used to express the actual merging rule must surely constitute the best conceivable form of documentation for that merging rule.
In other words, the TMQL (and the TMDM on which it is based) are sufficient to satisfy the requirements of this use case. The use case therefore does not justify the existence of a separate reference model.
The primary consideration underlying UC6 is probably to preserve the value of the customer's investment in Topic Maps and to provide maximum portability across different implementations of the standard. This is of course an extremely important goal, to which everyone in WG3 presumably subscribes. Our position is that this goal is best met by delivering a standard query language that is based on a single data model and that is highly expressive, formal, and well-documented. The use case therefore underlines the need for immediate progress on both the TMDM and TMQL.
UC7 describes a certain application scenario in general terms and also using two specific examples of such a scenario. The essence of the use case seems to be the following:
Given agreement on information sharing between two organizations, how can a user of an application inside one organization request and receive highly specific information from the other organization?
The assumption is that the two organizations have agreed upon a common model (precondition 2 in section 8.1.3 Preconditions); we assume this to mean that they are using Topic Maps as defined in ISO 13250. (If some other model has been chosen, it is hard to see what relevance the use case has to ISO 13250.)
One important prerequisite for achieving the use case goal is for people and applications in the two organizations to "know" when they are talking about the same subject. This is the purpose for which the concept of Published Subjects was invented; that concept is both described and used in the TMDM. We assume therefore that in addition to agreeing to use the Topic Maps Data Model, the organizations in question have aligned their ontologies and the identifiers they use for the subjects with which their information is concerned.
Given the above assumptions, the only additional requirements that are necessary in order to meet UC7 are:

1. the ability to express precise requests for information as queries; and
2. the ability to send such requests to a system in the other organization and receive the results.
The first of these requirements points, once again, to TMQL. The second underlines the need for a Remote Access Protocol for Topic Maps, as was discussed at the Philadelphia meeting of WG3. (A first cut at such a protocol, called TMRAP, is being prepared for presentation at the Amsterdam meeting of WG3.) Nothing in the use case indicates the need for a separate "reference model" in addition to TMDM, TMQL and TMRAP.
This paper has attempted to understand and evaluate the use cases in N490 in order to determine whether or not they provide sufficient justification for a Topic Maps Reference Model that is separate from the Topic Maps Data Model currently at Committee Draft stage as ISO 13250-2.
The conclusion is that every one of the use cases can be satisfied either by the data model alone, by the data model in combination with a topic maps query language, or by a combination of data model, topic maps query language, and remote access protocol.
From this we conclude that the use cases in N490 are insufficient grounds for the development of a separate reference model.
It may be that there are other grounds, as yet unarticulated, for such a reference model. If so, convincing use cases should be brought forward as soon as possible, otherwise it is hard to see why WG3 should continue to devote precious resources to this work. It is even harder to see why WG3 should delay progress on TMDM, TMQL, and TMCL for a model that still lacks any convincing justification.
This topic map (in LTM syntax) and the accompanying queries (in tolog syntax) are used to illustrate Use Case #1 in N490.
@"iso-8859-1"

/* === TOPIC TYPES ========================================================== */

/* --- Claimant ------------------------------------------------------------- */
[claimant = "Claimant"]
[grove : claimant = "Geir Ove Grønmo"]
[larsga : claimant = "Lars Marius Garshol"]
[pepper : claimant = "Steve Pepper"]
[sylvia : claimant = "Sylvia Schwab"]
[pam : claimant = "Pamela Gennusa"]
[pam2 : claimant = "Pamela L. Gennusa"]
[gra : claimant = "Graham Moore"]
[naito : claimant = "Motomu Naito"]

/* --- Claimtype ------------------------------------------------------------ */
[claimtype = "Claimtype"]
[claimtype1 : claimtype = "Claim type 1"]
[claimtype2 : claimtype = "Claim type 2"]
[claimtype3 : claimtype = "Claim type 3"]

/* --- Physician ------------------------------------------------------------ */
[physician = "Physician"]
[olsen : physician = "Dr. Olsen"]
[hansen : physician = "Dr. Hansen"]
[smith : physician = "Dr. Smith"]
[tanaka : physician = "Dr. Tanaka"]

/* --- Location ------------------------------------------------------------- */
[location = "Location"]
[oslo : location = "Oslo"]
[cambridge : location = "Cambridge"]
[tokyo : location = "Tokyo"]

/* === OCCURRENCE TYPES ===================================================== */
[ssnum = "Social Security Number"]
{grove, ssnum, [[4592351098324198924]]}
{larsga, ssnum, [[1028374618273418324]]}
{pepper, ssnum, [[8263493485049548049]]}
{sylvia, ssnum, [[3830495629487561823]]}
{pam, ssnum, [[6895089129471456938]]}
{pam2, ssnum, [[6895089129471456938]]}
{gra, ssnum, [[7039578134750931786]]}
{naito, ssnum, [[5124351893509134591]]}

/* === ASSOCIATION TYPES ==================================================== */

/* --- Located in ----------------------------------------------------------- */
[located-in = "Located in" = "Location of" /place]
[person = "Person"]
[place = "Place"]
located-in( grove : person, oslo : place )
located-in( larsga : person, oslo : place )
located-in( pepper : person, oslo : place )
located-in( sylvia : person, oslo : place )
located-in( pam : person, cambridge : place )
located-in( pam2 : person, cambridge : place )
located-in( gra : person, cambridge : place )
located-in( naito : person, tokyo : place )
located-in( olsen : person, oslo : place )
located-in( hansen : person, oslo : place )
located-in( smith : person, cambridge : place )
located-in( tanaka : person, tokyo : place )

/* --- Patient of ---------------------------------------------------------- */
[patient-of = "Patient of" = "Has patient" /physician]
[patient = "Patient"]
patient-of( grove : patient, olsen : physician )
patient-of( larsga : patient, hansen : physician )
patient-of( pepper : patient, hansen : physician )
patient-of( sylvia : patient, hansen : physician )
patient-of( gra : patient, olsen : physician )
patient-of( pam : patient, smith : physician )
patient-of( pam2 : patient, smith : physician )
patient-of( naito : patient, tanaka : physician )

/* --- Has claim ----------------------------------------------------------- */
[has-claim = "Has claimant" = "Claimants" /claimtype]
has-claim( grove : claimant, claimtype1 : claimtype )
has-claim( larsga : claimant, claimtype2 : claimtype )
has-claim( pepper : claimant, claimtype2 : claimtype )
has-claim( sylvia : claimant, claimtype3 : claimtype )
has-claim( pam : claimant, claimtype1 : claimtype )
has-claim( pam2 : claimant, claimtype1 : claimtype )
has-claim( gra : claimant, claimtype2 : claimtype )
has-claim( naito : claimant, claimtype3 : claimtype )

/* =========================================================================
   Useful Queries
   ---

   DATA OVERVIEW:

   select $T, $L, $P, $C, $SS from
     located-in($T : person, $L : place),
     patient-of($T : patient, $P : physician),
     has-claim($T : claimant, $C : claimtype),
     ssnum($T, $SS)?

   (1) People with the same physician

   select $T1, $T2 from
     patient-of($T1 : patient, $P : physician),
     patient-of($T2 : patient, $P : physician),
     $T1 /= $T2
   order by $T1?

   (2) ... as (1) + all are located in the same place

   select $T1, $T2 from
     patient-of($T1 : patient, $P : physician),
     patient-of($T2 : patient, $P : physician),
     located-in($T1 : person, $L : place),
     located-in($T2 : person, $L : place),
     located-in($P : person, $L : place),
     $T1 /= $T2
   order by $T1?

   (3) ... as (2) + same claim type

   select $T1, $T2 from
     patient-of($T1 : patient, $P : physician),
     patient-of($T2 : patient, $P : physician),
     located-in($T1 : person, $L : place),
     located-in($T2 : person, $L : place),
     located-in($P : person, $L : place),
     has-claim($T1 : claimant, $C : claimtype),
     has-claim($T2 : claimant, $C : claimtype),
     $T1 /= $T2
   order by $T1?

   (4) ... as (3) + different SS numbers

   select $T1, $T2 from
     patient-of($T1 : patient, $P : physician),
     patient-of($T2 : patient, $P : physician),
     located-in($T1 : person, $L : place),
     located-in($T2 : person, $L : place),
     located-in($P : person, $L : place),
     has-claim($T1 : claimant, $C : claimtype),
     has-claim($T2 : claimant, $C : claimtype),
     $T1 /= $T2,
     ssnum($T1, $S1),
     ssnum($T2, $S2),
     $S1 /= $S2
   order by $T1?
   ========================================================================= */
This topic map (in LTM syntax) and the accompanying queries (in tolog syntax) are used to illustrate Use Case #2 in N490.
[location = "Location"]
[shortname = "Short name"]
[gabx = "GABx"]
[gaby = "GABy"]

[loc1 : location = "Bar 1" = "Bar" /shortname]
[loc2 : location = "Bar 2" = "Bar" /shortname]
[loc3 : location = "Bar 3" = "Bar" /shortname]
[loc4 : location = "Foo"]

{loc1, gabx, [[-23440]]}
{loc1, gaby, [[6736321]]}
{loc2, gabx, [[-23440]]}
{loc2, gaby, [[6736321]]}
{loc3, gabx, [[-29873]]}
{loc3, gaby, [[6730858]]}
{loc4, gabx, [[-29873]]}
{loc4, gaby, [[6730858]]}

/* =========================================================================
   Useful Queries
   ---

   DATA OVERVIEW:

   select $T, $SN, $X, $Y from
     gabx($T, $X), gaby($T, $Y),
     { topic-name($T, $N), value($N, $SN), scope($N, shortname) }
   order by $X, $T?

   ------------------------------------------
   (1) Locations with same name and same coordinates

   basename($T, $BN) :- topic-name($T, $N), value($N, $BN).

   select $T1, $T2 from
     basename($T1, $BN), gabx($T1, $X), gaby($T1, $Y),
     basename($T2, $BN), gabx($T2, $X), gaby($T2, $Y),
     $T1 /= $T2?

   ------------------------------------------
   (2) Locations with same name and almost same coordinates

   import "math.tl" as func

   close-to($T1, $T2) :-
     gabx($T1, $X1), value($X1, $x1),
     gaby($T1, $Y1), value($Y1, $y1),
     gabx($T2, $X2), value($X2, $x2),
     gaby($T2, $Y2), value($Y2, $y2),
     func:less-than(func:abs($x1 - $x2), 10000),
     func:less-than(func:abs($y1 - $y2), 10000).

   same-name($T1, $T2) :-
     topic-name($T1, $N1), value($N1, $BN),
     topic-name($T2, $N2), value($N2, $BN).

   select $T1, $T2 from
     same-name($T1, $T2),
     close-to($T1, $T2),
     $T1 /= $T2?
   ========================================================================= */
This topic map (in LTM syntax) and the accompanying query (in tolog syntax) are used to illustrate Use Case #3 in N490 in conjunction with the RDF document that follows it. The topic map contains three documents, two authors, and three written-by associations.
/* ontological topics */
[document = "Document" @"http://psi.ontopia.net/document"]
[author = "Author" @"http://psi.ontopia.net/author"]
[written-by = "Written by" = "Author of" /document
  @"http://purl.org/dc/elements/1.1/creator"]

/* documents */
[tao : document = "The TAO of Topic Maps"]
[rdf : document = "Living with Topic Maps and RDF"]
[tax : document = "Metadata? Thesauri? Taxonomies? Topic Maps!"]

/* authors */
[larsga : author = "Lars Marius Garshol" @"http://psi.ontopia.net/person/larsga"]
[pepper : author = "Steve Pepper" @"http://psi.ontopia.net/person/pepper"]

/* associations */
written-by( tao : document, pepper : author )
written-by( tax : document, larsga : author )
written-by( rdf : document, larsga : author )

/* =========================================================================
   Useful Queries
   ---

   Documents written by Lars Marius Garshol (this query can be used on the
   Topic Map, the RDF document, or both together):

   using dc for i"http://purl.org/dc/elements/1.1/"
   using onto for i"http://psi.ontopia.net/"
   using person for i"http://psi.ontopia.net/person/"

   dc:creator( $DOCUMENT : onto:document, person:larsga : onto:author )?
   ========================================================================= */
This RDF document describes one document and its authors. It also contains a mapping from RDF constructs to Topic Maps constructs.
<?xml version="1.0" encoding="ISO-8859-1"?>
<rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:rtm="http://psi.ontopia.net/rdf2tm/#">

  <rdf:Description rdf:about="http://psi.ontopia.net/xml">
    <rdf:type rdf:resource="http://psi.ontopia.net/document"/>
    <rdfs:label>The XML Papers</rdfs:label>
    <dc:creator rdf:resource="http://psi.ontopia.net/person/pepper"/>
    <dc:creator rdf:resource="http://psi.ontopia.net/person/larsga"/>
  </rdf:Description>

  <rdf:Description rdf:about="http://psi.ontopia.net/person/larsga">
    <rdf:type rdf:resource="http://psi.ontopia.net/author"/>
    <rdfs:label>Lars Marius Garshol</rdfs:label>
  </rdf:Description>

  <rdf:Description rdf:about="http://psi.ontopia.net/person/pepper">
    <rdf:type rdf:resource="http://psi.ontopia.net/author"/>
    <rdfs:label>Steve Pepper</rdfs:label>
  </rdf:Description>

  <!-- mappings (= UC3 "X-ref ontology") -->
  <rdf:Description rdf:about="http://www.w3.org/1999/02/22-rdf-syntax-ns#type">
    <rtm:maps-to rdf:resource="http://psi.ontopia.net/rdf2tm/#instance-of"/>
  </rdf:Description>

  <rdf:Description rdf:about="http://www.w3.org/2000/01/rdf-schema#label">
    <rtm:maps-to rdf:resource="http://psi.ontopia.net/rdf2tm/#basename"/>
  </rdf:Description>

  <rdf:Description rdf:about="http://purl.org/dc/elements/1.1/creator">
    <rtm:maps-to rdf:resource="http://psi.ontopia.net/rdf2tm/#association"/>
    <rtm:subject-role rdf:resource="http://psi.ontopia.net/document"/>
    <rtm:object-role rdf:resource="http://psi.ontopia.net/author"/>
  </rdf:Description>

  <rdf:Description rdf:about="http://purl.org/dc/elements/1.1/creator">
    <rdfs:label>Written by</rdfs:label>
  </rdf:Description>

  <rdf:Description rdf:about="http://psi.ontopia.net/document">
    <rdfs:label>Document</rdfs:label>
  </rdf:Description>

  <rdf:Description rdf:about="http://psi.ontopia.net/author">
    <rdfs:label>Author</rdfs:label>
  </rdf:Description>
</rdf:RDF>