
We have domain models described in some XML format. Given the domain models, I want to generate tooling that helps the testers/domain experts express data in text (and a domain-specific test framework later). IDE support is mandatory (IDEA or Eclipse).

Say I have this pseudo-model:

User
fn string 120 chars mandatory
ln string 120 chars mandatory
address not-mandatory

Address
street mandatory
city mandatory

A typical usage scenario:

user opens the IDE
creates a new file
when content assist is invoked, it should offer options like 'user', 'address', etc.

If I choose 'user', further Ctrl-Space should give 'fn', 'ln', 'address' as options.

I know this can be done with Xtext or JetBrains MPS, etc. But I want to understand which technology lends itself to the following requirements.

  1. The models are fed to the system at run time (new, updates, deletes, etc.), so I cannot have a static set of grammars. How can I structure it so that the model/property assist is resolved at run time, or at least the grammar is generated (maybe a part of it)?
  2. When I am working with one set of 'grammars', if I point my target server to a different version (which may have a different set of models), I want the editor to validate my existing files and flag errors.
  3. I get the data files in XML, text or via server lookups.
  4. It is very important for me to transform the models into some other format or interpret them in Java/Groovy.

For example, I may have the following data file:

user {
fn : Tom
ln : Jill 
hobby : movies
}

But when I validate this file against a server which does not know the 'hobby' property, I want the editor to mark an error on that property.

I have plans to add much more functionality to this DSL/toolkit. Any hints on which technology is more suitable?

thanks

user19937
  • Your question needs a very big answer. However, you can follow this link and see if there is any update: https://mps-support.jetbrains.com/hc/en-us/community/posts/206609185-textual-representation-of-the-module – Sanjit Kumar Mishra Sep 23 '16 at 14:18

1 Answer


I know this can be done with Xtext or JetBrains MPS, etc. But I want to understand which technology lends itself to the following requirements.

I think Xtext is good for your requirements under the condition that you have (or can create) an XML schema for your XML domain models.

  1. The models are fed to the system at run time (new, updates, deletes, etc.), so I cannot have a static set of grammars. How can I structure it so that the model/property assist is resolved at run time, or at least the grammar is generated (maybe a part of it)?

If I understand you correctly, you don't really need specific grammar rules for each XML data model, but only cross-references to the data model.

EMF has support for generating Java classes from XSD files, and Xtext can reference XML files conforming to the XSD schema if you add them to the Xtext index using a custom indexer (the Xtext interface IDefaultResourceDescriptionStrategy). So you can create a normal Xtext project with a grammar etc. for your DSL and use cross-references that refer to your XML domain model.
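
A minimal sketch of such an indexing strategy, assuming the classes that EMF generates from your XSD expose a 'name' attribute (that attribute name is an assumption, as is the class name below):

    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;
    import org.eclipse.xtext.naming.QualifiedName;
    import org.eclipse.xtext.resource.EObjectDescription;
    import org.eclipse.xtext.resource.IEObjectDescription;
    import org.eclipse.xtext.resource.impl.DefaultResourceDescriptionStrategy;
    import org.eclipse.xtext.util.IAcceptor;

    public class DomainModelDescriptionStrategy extends DefaultResourceDescriptionStrategy {

        @Override
        public boolean createEObjectDescriptions(EObject eObject, IAcceptor<IEObjectDescription> acceptor) {
            // Export every object from the XSD-generated model that has a 'name'
            // attribute (e.g. User, Address and their properties), so the DSL
            // editor can offer and validate cross-references to them.
            EStructuralFeature nameFeature = eObject.eClass().getEStructuralFeature("name");
            if (nameFeature != null && eObject.eGet(nameFeature) instanceof String) {
                String name = (String) eObject.eGet(nameFeature);
                acceptor.accept(EObjectDescription.create(QualifiedName.create(name), eObject));
            }
            // Return true so the children of this object get indexed as well.
            return true;
        }
    }

The strategy would then be bound in the DSL's runtime Guice module so that Xtext uses it when indexing the imported XML resources.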

  2. When I am working with one set of 'grammars', if I point my target server to a different version (which may have a different set of models), I want the editor to validate my existing files and flag errors.
  3. I get the data files in XML, text or via server lookups.

EMF uses URIs to identify resources, so if you generate an Ecore model as I described, it should be possible to import the XML domain models using http:// or file:// URIs (or whatever, it's extensible), or something that you internally resolve to URIs.
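
As a rough illustration (the URL and class names below are placeholders), loading a domain model through an EMF URI could look like this, assuming the package generated from your XSD has been registered:

    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.emf.ecore.resource.ResourceSet;
    import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
    import org.eclipse.emf.ecore.xmi.impl.GenericXMLResourceFactoryImpl;

    public class DomainModelLoader {

        public EObject load(String location) {
            ResourceSet resourceSet = new ResourceSetImpl();
            // Register a factory for plain .xml files; the resource factory
            // generated from your XSD model would also work here.
            resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                    .put("xml", new GenericXMLResourceFactoryImpl());
            // Works the same for file://, http:// or a custom scheme that you
            // back with your own URIHandler talking to the target server.
            Resource resource = resourceSet.getResource(URI.createURI(location), true);
            return resource.getContents().get(0);
        }
    }

Switching the target server version then just means resolving different URIs; revalidating the existing DSL files against the newly loaded models will surface properties (like 'hobby') that no longer exist.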

  4. It is very important for me to transform the models into some other format or interpret them in Java/Groovy.

Here you have the choice between writing an interpreter, an Xbase inferrer, or a generator (each of which can be implemented well in Xtend), depending on your requirements.
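
For the generator route, a minimal sketch (the output file name and format are purely illustrative):

    import org.eclipse.emf.common.util.TreeIterator;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.xtext.generator.AbstractGenerator;
    import org.eclipse.xtext.generator.IFileSystemAccess2;
    import org.eclipse.xtext.generator.IGeneratorContext;

    public class DataFileGenerator extends AbstractGenerator {

        @Override
        public void doGenerate(Resource resource, IFileSystemAccess2 fsa, IGeneratorContext context) {
            StringBuilder out = new StringBuilder();
            // Walk every object the parser produced and emit one line per
            // element; replace this with whatever target format you need.
            TreeIterator<EObject> it = resource.getAllContents();
            while (it.hasNext()) {
                EObject obj = it.next();
                out.append(obj.eClass().getName()).append('\n');
            }
            fsa.generateFile("transformed.txt", out.toString());
        }
    }

An interpreter or an Xbase inferrer would hook into different extension points, but they operate on the same parsed EMF model, so you can also hand that model to Java/Groovy code directly.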

(Disclaimer: I am an employee at itemis, which is one of the main contributors to Xtext)

Bernhard Stadler