Archive (posts per month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2012 | | | | | | | | 7 | | | | |
| 2013 | | | | | | 11 | 32 | | | | | 23 |
| 2014 | 12 | | 1 | 4 | 17 | 14 | 3 | 26 | 100 | 42 | 15 | 6 |
| 2015 | 3 | | 19 | 4 | 9 | 4 | 4 | | 2 | 1 | | |
| 2016 | | | | | | | | 2 | | | 22 | 22 |
| 2017 | 5 | 4 | | | 1 | | | 6 | | | | |
| 2018 | | | | | | | | 1 | 2 | | | |
| 2019 | 1 | 4 | 1 | | 1 | | 12 | 2 | | 2 | 6 | 1 |
| 2020 | | 3 | 1 | | 6 | 4 | | | 1 | 1 | | |
| 2021 | | 1 | 3 | | | | | | | 3 | | |
| 2022 | | | | 5 | 1 | | 8 | 3 | | 7 | | |
August 2017 (posts per day):

| S | M | T | W | T | F | S |
|---|---|---|---|---|---|---|
| | | 1 | 2 | 3 | 4 (5) | 5 |
| 6 | 7 (1) | 8 | 9 | 10 | 11 | 12 |
| 13 | 14 | 15 | 16 | 17 | 18 | 19 |
| 20 | 21 | 22 | 23 | 24 | 25 | 26 |
| 27 | 28 | 29 | 30 | 31 | | |
From: Sarah K. <ske...@ca...> - 2017-08-07 08:25:27

> Whoops; good catch. I remember looking this up, and it seemed to me
> that 'relative complement' was the more precise mathematical term.
> That's the one I would prefer (and I believe the one that libsbml uses),
> but if people would prefer the other, I'm OK with that, too. Does jsbml
> use one or the other?

Yes, of course, libSBML uses 'relativeComplement' and JSBML uses 'difference' :-)

Does anyone else have a strong preference?

Sarah
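For anyone following the naming debate, the two candidate terms name the same binary operation. A quick sketch in Python (variable names are illustrative only): the 'difference' of A and B is the 'relative complement' of B in A, written A \ B, while the *absolute* complement is a different operation that requires a universal set.

```python
# 'difference' == 'relative complement' of B in A: elements of A not in B.
# The *absolute* complement of B is everything in the universe U not in B.
A = {1, 2, 3, 4}
B = {3, 4, 5}
U = {1, 2, 3, 4, 5, 6}

relative_complement = A - B        # same operation either name is chosen for
absolute_complement_of_B = U - B   # needs a universal set, so it is distinct

print(relative_complement)         # {1, 2}
print(absolute_complement_of_B)    # {1, 2, 6}
```

Either spelling ('difference' or 'relativeComplement') would therefore be mathematically defensible; the distinction Lucian cares about is only with 'absolute complement'.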
From: Lucian S. <luc...@gm...> - 2017-08-04 20:03:40

On Fri, Aug 4, 2017 at 6:33 AM, Sarah Keating <ske...@ca...> wrote:

> Cool - I like the small snippet type examples :-)
>
> Question(s):
>
> 1. In the spatialPoint and parametricObject objects they contain an array
> of data that is found in the text element of the object, and the snippets
> make this nice and clear. BUT a transformationComponent also contains an
> array of data which is specified as being an attribute. It doesn't have a
> snippet, but this seems like an anomaly?

Hmm, yeah. Nobody has actually shown me what this would look like, but my impression is that it's always a very short array, perhaps only 6 entries? If it were potentially hundreds of entries, I agree we should switch it to a text child, but if it's only six, leaving it as an attribute seems OK. Jim, if you have an example for this, I'd be happy to put it in.

> 2. The primitive types section lists values of SetOperator as "union",
> "intersection", and "difference", BUT section 3.35.1 on the CSGSetOperator
> says "union", "intersection", or "relativeComplement". The schema uses the
> first set, which makes the csgOnly example invalid, as it uses
> relativeComplement. I'm not sure which is right?

Whoops; good catch. I remember looking this up, and it seemed to me that 'relative complement' was the more precise mathematical term. That's the one I would prefer (and I believe the one that libsbml uses), but if people would prefer the other, I'm OK with that, too. Does jsbml use one or the other? One of my goals was to make sure that the term was distinct from 'absolute complement', which is a different set theory function: https://en.wikipedia.org/wiki/Complement_(set_theory)

> Note that other than these issues, the online validator will apply the
> syntactical RNG checks in line with the latest spec.

Thanks for getting that up and running, once again!

-Lucian
From: Sarah K. <ske...@ca...> - 2017-08-04 13:48:46

Cool - I like the small snippet type examples :-)

Question(s):

1. In the spatialPoint and parametricObject objects they contain an array of data that is found in the text element of the object, and the snippets make this nice and clear. BUT a transformationComponent also contains an array of data which is specified as being an attribute. It doesn't have a snippet, but this seems like an anomaly?

2. The primitive types section lists values of SetOperator as "union", "intersection", and "difference", BUT section 3.35.1 on the CSGSetOperator says "union", "intersection", or "relativeComplement". The schema uses the first set, which makes the csgOnly example invalid, as it uses relativeComplement. I'm not sure which is right?

Note that other than these issues, the online validator will apply the syntactical RNG checks in line with the latest spec.

Sarah

On 04/08/2017 01:10, Lucian Smith wrote:

> A new version (0.92) of the spatial specification is now available at:
> https://sourceforge.net/p/sbml/code/HEAD/tree//trunk/specifications/sbml-level-3/version-1/spatial/specification/spatial-v1-sbml-l3v1-rel0.92.pdf
>
> It features:
>
> * A new introduction by Robert Murphy, who is now added to the author page.
> * More explanation about certain constructs that were confusing.
> * Some more restrictions based on Sarah Keating's work at adding spatial
>   validation to libsbml.
> * Several examples, both partial examples in the spec itself, and
>   complete files available at
>   https://sourceforge.net/p/sbml/code/HEAD/tree//trunk/specifications/sbml-level-3/version-1/spatial/specification/examples/
>
> [...]
From: Lucian S. <luc...@gm...> - 2017-08-04 00:16:18

Following up on this other old message, too, since I addressed these things in the new spec. Thanks for the questions, Sarah!

On Fri, Feb 3, 2017 at 7:52 AM, Sarah Keating <ske...@ca...> wrote:

> Hi Guys
>
> I'm implementing validation of the spatial specification and I have the
> following question:
>
> How strictly do we enforce the coordinate system with the sampled fields?
>
> With the disclaimer that geometry does my head in and it is Friday
> afternoon - so I may just be missing the point ...
>
> We could be very rigid and say if you have declared 3 dimensions you
> must use all of them and only them - or we could be relaxed and just
> assume that if a geometry declares 3 coordinateComponents and a
> sampledField only 2 numSamples, then it is intended to be 2 dimensions.

The SampledField is designed to fill the Geometry with values at lattice points that are interpolated everywhere else in the Geometry. If a Geometry has three dimensions, you can't fill it with only two: you really must define all three dimensions and have non-zero values for each. I've updated the spec to reflect this (and made the type of 'numSamples*' 'positive int', since zero and negative values are nonsensical).

> Things that came to mind while contemplating this:
>
> 1. Would the following be valid:
>
> <spatial:listOfSampledFields>
>   <spatial:sampledField
>       spatial:compression="uncompressed" spatial:dataType="double"
>       spatial:id="sampledField_1" spatial:interpolationType="nearestNeighbor"
>       spatial:numSamples1="0" spatial:numSamples2="0" spatial:numSamples3="0"
>       spatial:samplesLength="0"/>
> </spatial:listOfSampledFields>
>
> It is quite clearly stating that there is no data:
>
> samplesLength = numSamples1 + numSamples2 + numSamples3 = 0

If you have no data, you can't fill a Geometry with values. Also, samplesLength is 'numSamples1 * numSamples2 * numSamples3': they are the dimensions of the data, not summed arrays (and another reason why none of those values can be zero).

> 2. Should there also be rules that relate the numSamples attributes to
> the number of CoordinateComponents, such as: if the geometry has
> cartesianX and cartesianY coordinateComponents, then it should have
> numSamples1 and numSamples2 but NOT numSamples3?
>
> This would mean that my zero-data sampledField was only valid if the
> geometry had all three coordinateComponents.

Yes, there should indeed be a rule about the numSamples attributes matching the corresponding axes in the geometry. Added!

> 3. So if there is a sampledField, MUST there be at least one
> coordinateComponent?

Yes. In fact, there *always* should be at least one coordinateComponent. Updated the spec to say this.

> 4. Can a geometry have more than one coordinateComponent of type
> cartesianX? I'm guessing not, but that needs to be much more specific.

No, you can only have one of each. Added!

> 5. What does a sampledField mean if it uses numSamples1 and numSamples3
> with a 2-D array? Do we assume they meant x and y, or is this invalid?
> Or could you have a 3-D geometry but a sampled field that was only in the
> x-z plane and declare it this way?

I think allowing someone to use just numSamples1 and numSamples3 would be confusing and non-intuitive, and much easier to catch with a validation rule, so I'm adding language in the spec to that effect. Again, the SampledField fills the entire Geometry with numbers. There is no way to define a sampled field of a lower dimensionality than the Geometry itself. I've tried to update the spec to reflect this.

> I'm not advocating either being ultra-strict or ultra-relaxed; but
> reading the specification does not make it clear how accurate I need to be.

Thanks so much for going through it! These are indeed important. The new spec (0.92) has these refinements in it and some more examples, but nothing fundamental should change. I haven't added the actual validation rules, just updated the spec to talk about what's required; I'll talk with you about how best to add the validation rules themselves, and where, since the current list is, I think, fully auto-generated.

-Lucian
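The constraints discussed in that exchange (one numSamples attribute per coordinateComponent, each a positive int, and samplesLength equal to their *product*) can be sketched as a toy checker. This is a hypothetical illustration, not the libSBML validation API; the function name and messages are invented.

```python
# Hypothetical sketch (NOT libSBML) of the SampledField rules discussed:
# the numSamples* values are the dimensions of the data grid, so
# samplesLength must be their product, and none of them may be zero.
def check_sampled_field(num_samples, samples_length, n_coordinate_components):
    if len(num_samples) != n_coordinate_components:
        return "numSamples count must match the geometry's coordinateComponents"
    if any(n <= 0 for n in num_samples):
        return "each numSamples value must be a positive int"
    product = 1
    for n in num_samples:
        product *= n
    if samples_length != product:
        return "samplesLength must equal the product of the numSamples values"
    return "ok"

print(check_sampled_field([0, 0, 0], 0, 3))   # Sarah's example 1: rejected
print(check_sampled_field([4, 4, 2], 32, 3))  # valid: 4 * 4 * 2 == 32
print(check_sampled_field([4, 4], 16, 3))     # question 2/5: axes must match
```

The third call corresponds to questions 2 and 5 above: a field declaring fewer dimensions than the Geometry is simply invalid.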
From: Lucian S. <luc...@gm...> - 2017-08-04 00:13:27
In going over old sbml-spatial posts in preparation for a new release of
the spec, I found this unanswered email from Devin from October... of
2015. So, in case the issues brought up were not resolved, in possibly the
longest delay in answering an email ever...
On Fri, Oct 16, 2015 at 4:32 AM, Devin Sullivan <
dev...@sc...> wrote:
> The big changes:
> CSGOnly - We now have "nested" rotations. I think I'm reading the spec
> right, but if someone can comment on what they think of this implementation
> that would help a great deal.
>
Here's a bit from your CSGOnly file:

<spatial:csgObject spatial:domainType="EC"
    spatial:id="EC" spatial:ordinal="0">
  <spatial:csgTransformation>
    <spatial:csgTranslation
        spatial:translateX="11.3951" spatial:translateY="14.7822"
        spatial:translateZ="1.0043"/>
    <spatial:csgScale spatial:id="scale"
        spatial:scaleX="13.8529" spatial:scaleY="17.3686"
        spatial:scaleZ="2.0295"/>
  </spatial:csgTransformation>
  <spatial:csgPrimitive spatial:primitiveType="cube"/>
</spatial:csgObject>
Unfortunately, this doesn't match the spec yet. (Though perhaps in the
intervening years, you've changed it!) The problem is that any
translation/scale/rotation performs its transformation on its *child*
object, like a function works on its argument. Your 'csgObject' element
has *two* children instead of just one; you also use 'csgTransformation',
when it's actually just an abstract class that the various particular
transformations inherit from.
What I assume you want is something like 'take a cube, scale it, and
translate it." If it helps, think of 'translate' as a function, and
'scale' as a function, that both take a single argument (a shape) and
return a slightly different shape. What you would want, then, is to write
a function like:
translate(scale(cube))
In XML, it would look like:
<spatial:csgObject spatial:domainType="EC"
    spatial:id="EC" spatial:ordinal="0">
  <spatial:csgTranslation spatial:translateX="11.3951"
      spatial:translateY="14.7822" spatial:translateZ="1.0043">
    <spatial:csgScale spatial:id="scale"
        spatial:scaleX="13.8529" spatial:scaleY="17.3686"
        spatial:scaleZ="2.0295">
      <spatial:csgPrimitive spatial:primitiveType="cube"/>
    </spatial:csgScale>
  </spatial:csgTranslation>
</spatial:csgObject>
which, in English, would mean "Take a primitive cube, centered at the
origin, scale it by [13.8529, 17.3686, 2.0295], then translate the result
by [11.3951, 14.7822, 1.0043]."
I've tried to update the new spec to be more clear about this; let me know
if it is still confusing. I did add an example, which will hopefully help
(section 3.43).
>
> MeshOnly (not attached) - According to the above examples and the spec,
> the SpatialPoints is now a separate node of the ParametricGeometry class. I
> don't get this.
> Say I have 2 parametric geometries.
> Previously I would define the list of "spatialPoints" (vertices) and the
> list of "faces" as fields in each ParametricObject.
> In the new examples it looks like spatialPoints have been moved outside of
> the listOfParametericObjects and ParametricObject. So in my example where I
> now have two (or any number of) parametric objects, I would have to define
> all the spatialPoints (vertices) before defining the connections between
> that list in my "faces" which is now just a numerical string appended to
> the parametricObject node with no attribute name.
>
> This seems strange to me. Let's now assume I have 1000 objects. I want to
> select just the first object from the model. I have to look through the
> spatialPoints (vertices) for all the vertices in my 1000 objects to find
> which are being connected in my Parametric object. In other words, it makes
> separating parametric objects much harder. Also, if I delete a parametric
> object, I now need to re-define all the faces listed in my parametric
> object because I removed those vertices from my list of defined vertices.
>
> Can someone explain the rationale behind this change, or maybe I'm just
> interpreting incorrectly.
>
Having a common set of points for multiple parametric objects is so that
your parametric objects can share vertices and/or faces. The points define
the lattice, basically, and the objects draw shapes using the points in
that lattice. If two domains adjoin each other, then, if you use the same
indexes for both, you'll be assured that they will line up correctly. So
if you're selecting one object, it's defined by its list of point indices,
and the location of those points is defined by the lattice. If you delete
a parametric object, you don't need to delete the corresponding points in
the lattice: if a point in the lattice is unused, that's fine: it's just a
point in space. It doesn't define an object until its index is used.
So, if you select the first object from the model, you get a list of
indices. The location of those indices are defined by the SpatialPoints
object, along with some extra indices that are unused by that object.
Hopefully, those extra points are easily ignored.
When you delete a parametric object, you just delete the
<parametricObject>, and it's gone. Perhaps it used some indices from the
SpatialPoints that are now no longer being used, but that's OK; there's no
requirement that each index gets used in some ParametricObject.
When you add a new parametricObject, you re-use the index of any point in
that object that was in the list of SpatialPoints already, and add any new
point to the end of that list. Those new points will not be used by any
existing parametricObject, because all of the indices are higher than any
used thus far. (For that matter, there's no requirement that you re-use
points that were used before--it's perfectly fine to have index 40 and
index 399 refer to the same location in space. I suppose the only
restriction then is to not then use index 40 and index 399 in the same face
of a single object.)
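The shared-lattice design described above can be sketched in a few lines (a toy illustration, not the SBML data model): points live in one list, each object is just a list of index references, and deleting an object never invalidates another object's indices.

```python
# Toy sketch of the shared SpatialPoints lattice: every vertex is stored
# once, and each ParametricObject references vertices by index.
spatial_points = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 0), (2, 1)]

# Two adjoining objects share the edge between points 1 and 2, so they
# are guaranteed to line up exactly.
parametric_objects = {
    "left":  [0, 1, 2, 3],
    "right": [1, 4, 5, 2],
}

# Selecting one object: resolve its indices against the lattice.
left_vertices = [spatial_points[i] for i in parametric_objects["left"]]
print(left_vertices)  # [(0, 0), (1, 0), (1, 1), (0, 1)]

# Deleting an object leaves the lattice untouched; points 4 and 5 become
# unused, which is harmless.
del parametric_objects["right"]
print(len(spatial_points))  # still 6
```

The payoff is exactly what the email argues: shared vertices keep adjoining domains watertight, and removal is a local operation on the object, not a renumbering of the whole point list.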
Does that make more sense? Does it address your concerns? I'm again
trying to address this more specifically in the 0.92 spec.
Thanks, and apologies again for not answering this back when you first
posted it.
-Lucian
From: Lucian S. <luc...@gm...> - 2017-08-04 00:10:26

A new version (0.92) of the spatial specification is now available at:

https://sourceforge.net/p/sbml/code/HEAD/tree//trunk/specifications/sbml-level-3/version-1/spatial/specification/spatial-v1-sbml-l3v1-rel0.92.pdf

It features:

* A new introduction by Robert Murphy, who is now added to the author page.
* More explanation about certain constructs that were confusing.
* Some more restrictions based on Sarah Keating's work at adding spatial validation to libsbml.
* Several examples, both partial examples in the spec itself, and complete files available at https://sourceforge.net/p/sbml/code/HEAD/tree//trunk/specifications/sbml-level-3/version-1/spatial/specification/examples/

I did notice that in the old example 'sampledfield_3d.xml', the three domains were set up by using the 'sampledValue' attribute in a continuous field. This meant that there were very few places in the field that had that exact value, making the majority of the field not belong to any domain, and making the few spots that were assigned to a domain spotty and discontinuous.

It seems likely to me that the intent of the model (whose three sampledValue values were 0, 128, and 255) was 'anything with a value *closest* to 0 is assigned to the first domain; any value that's closest to 128 is the second; and anything closest to 255 is the last'. This, however, is not my interpretation of the spec (someone correct me if I'm wrong?). The intent of the 'sampledValue' attribute is to assign parts of the geometry with *exactly* that value, and *only* exactly that value, to the domain in question. This makes it handy for fields that contain nothing but a few values (a bunch of 0's, a bunch of 128's, and a bunch of 255's), but is inappropriate for a field (like the one provided) where sample values range anywhere from 0 to 255.

Because of this, I changed the example to use 'minValue' and 'maxValue' for each domain instead. I set the first domain to apply for all values from 0 to 64, the second for all values from 64 to 192, and the last for all values from 192 to 256.

This did make me notice one thing: what to do about values that are exactly on the border between one domain and the next? Because of this, I modified the spec slightly to say that 'minValue' is *inclusive* and 'maxValue' is *exclusive*. So, in the example, values of exactly 64 are assigned to the second domain, and values of 192 are assigned to the third. (And since the highest value in the field is 255, I set the third domain's max to 256.)

This wouldn't make a big difference for most continuous fields, where any one value would only exist at a 2-D boundary in a 3-D space, but it would make a difference when whole areas of the field were assigned the exact same value: even extrapolating between the lattice points would still give you a 3-D space that would otherwise be ambiguous as to which domain to assign it to.

If this makes sense, great! But if you think something else should be done, or if you think the new spec needs to be clearer about what it means to set things up this way, let me know.

-Lucian
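The inclusive-min/exclusive-max rule described in that announcement can be illustrated with a small sketch (a hypothetical helper, not part of any SBML library); half-open intervals guarantee every sample value lands in exactly one domain, including the border values 64 and 192:

```python
# Sketch of the half-open-interval rule: minValue is inclusive and
# maxValue is exclusive. Domain ranges follow the revised
# sampledfield_3d.xml example (max of 256 so that 255 stays in range).
domains = [("domain1", 0, 64), ("domain2", 64, 192), ("domain3", 192, 256)]

def assign(value):
    for name, min_value, max_value in domains:
        if min_value <= value < max_value:
            return name
    return None  # value outside every domain

print(assign(0))    # domain1
print(assign(64))   # border value goes to domain2 (min is inclusive)
print(assign(192))  # border value goes to domain3
print(assign(255))  # domain3
```

With closed intervals instead, 64 and 192 would each belong to two domains at once, which is exactly the ambiguity the spec change removes.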