Jetty and Loom

Greg Wilkins gregw at webtide.com
Tue Jan 5 15:59:35 UTC 2021


Ron,

I just ran again, without native memory monitoring (that messes with the
heap a bit), and for kernel threads I got the following (the usual 32K
threads were achieved):

[19.498s][info][gc] GC(75) Pause Young (Normal) (G1 Evacuation Pause)
41M->31M(84M) 25.609ms
[19.552s][info][gc] GC(76) Pause Young (Concurrent Start) (G1 Evacuation
Pause) 39M->39M(84M) 13.343ms
[19.552s][info][gc] GC(77) Concurrent Mark Cycle
[19.562s][info][gc] GC(77) Pause Remark 41M->41M(100M) 3.092ms
[19.566s][info][gc] GC(77) Pause Cleanup 41M->41M(100M) 0.063ms
[19.567s][info][gc] GC(77) Concurrent Mark Cycle 14.628ms
[19.585s][info][gc] GC(78) Pause Young (Prepare Mixed) (G1 Evacuation
Pause) 43M->43M(100M) 13.047ms
[19.612s][info][gc] GC(79) Pause Young (Mixed) (G1 Evacuation Pause)
47M->48M(304M) 11.311ms
[19.841s][info][gc] GC(80) Pause Young (Normal) (G1 Evacuation Pause)
124M->71M(304M) 18.208ms
[19.993s][info][gc] GC(81) Pause Young (Normal) (G1 Evacuation Pause)
147M->74M(304M) 15.226ms
[20.118s][info][gc] GC(82) Pause Young (Normal) (G1 Evacuation Pause)
154M->74M(304M) 17.345ms
[20.267s][info][gc] GC(83) Pause Young (Normal) (G1 Evacuation Pause)
166M->74M(732M) 22.577ms
[20.838s][info][gc] GC(84) Pause Young (Normal) (G1 Evacuation Pause)
394M->76M(732M) 28.790ms
[21.421s][info][gc] GC(85) Pause Young (Normal) (G1 Evacuation Pause)
464M->79M(732M) 41.358ms
[22.070s][info][gc] GC(86) Pause Young (Normal) (G1 Evacuation Pause)
499M->82M(732M) 52.017ms
[22.739s][info][gc] GC(87) Pause Young (Normal) (G1 Evacuation Pause)
506M->85M(1432M) 71.281ms
[24.309s][info][gc] GC(88) Pause Young (Normal) (G1 Evacuation Pause)
925M->90M(1432M) 95.097ms
[25.646s][info][gc] GC(89) Pause Young (Normal) (G1 Evacuation Pause)
926M->96M(1432M) 128.290ms
[27.035s][info][gc] GC(90) Pause Young (Normal) (G1 Evacuation Pause)
928M->102M(1432M) 147.318ms
[28.418s][info][gc] GC(91) Pause Young (Normal) (G1 Evacuation Pause)
926M->108M(2528M) 174.245ms
[31.446s][info][gc] GC(92) Pause Young (Normal) (G1 Evacuation Pause)
1588M->118M(2528M) 226.050ms
[34.167s][info][gc] GC(93) Pause Young (Normal) (G1 Evacuation Pause)
1586M->127M(2528M) 278.373ms
[37.364s][info][gc] GC(94) Pause Young (Normal) (G1 Evacuation Pause)
1587M->138M(2528M) 349.308ms
[40.231s][info][gc] GC(95) Pause Young (Normal) (G1 Evacuation Pause)
1586M->148M(3296M) 379.033ms
[44.606s][info][gc] GC(96) Pause Young (Normal) (G1 Evacuation Pause)
2048M->160M(3296M) 463.962ms
[50.264s][info][gc] GC(97) Pause Young (Normal) (G1 Evacuation Pause)
2048M->173M(3296M) 481.924ms
[55.432s][info][gc] GC(98) Pause Young (Normal) (G1 Evacuation Pause)
2049M->186M(3296M) 539.238ms



For virtual threads I got to 43K threads before a really bad GC; the GC log:

[19.874s][info][gc] GC(75) Pause Young (Normal) (G1 Evacuation Pause)
41M->40M(92M) 20.038ms
[19.928s][info][gc] GC(76) Pause Young (Concurrent Start) (G1 Evacuation
Pause) 48M->48M(92M) 8.715ms
[19.928s][info][gc] GC(77) Concurrent Mark Cycle
[19.955s][info][gc] GC(78) Pause Young (Normal) (G1 Evacuation Pause)
52M->52M(92M) 3.126ms
[19.987s][info][gc] GC(77) Pause Remark 56M->56M(120M) 1.514ms
[19.990s][info][gc] GC(79) Pause Young (Normal) (G1 Evacuation Pause)
56M->56M(312M) 3.780ms
[20.019s][info][gc] GC(77) Pause Cleanup 62M->62M(312M) 0.110ms
[20.021s][info][gc] GC(77) Concurrent Mark Cycle 92.277ms
[20.118s][info][gc] GC(80) Pause Young (Normal) (G1 Evacuation Pause)
132M->88M(312M) 5.825ms
[20.172s][info][gc] GC(81) Pause Young (Normal) (G1 Evacuation Pause)
156M->106M(312M) 4.373ms
[20.222s][info][gc] GC(82) Pause Young (Normal) (G1 Evacuation Pause)
170M->123M(312M) 3.344ms
[20.273s][info][gc] GC(83) Pause Young (Normal) (G1 Evacuation Pause)
187M->140M(312M) 4.386ms
[20.321s][info][gc] GC(84) Pause Young (Concurrent Start) (G1 Evacuation
Pause) 200M->156M(568M) 5.126ms
[20.321s][info][gc] GC(85) Concurrent Mark Cycle
[20.359s][info][gc] GC(85) Pause Remark 190M->190M(568M) 0.788ms
[20.369s][info][gc] GC(85) Pause Cleanup 199M->199M(568M) 0.079ms
[20.371s][info][gc] GC(85) Concurrent Mark Cycle 49.480ms
[20.499s][info][gc] GC(86) Pause Young (Normal) (G1 Evacuation Pause)
320M->197M(568M) 9.008ms
[20.617s][info][gc] GC(87) Pause Young (Normal) (G1 Evacuation Pause)
349M->236M(568M) 8.947ms
[20.727s][info][gc] GC(88) Pause Young (Normal) (G1 Evacuation Pause)
372M->271M(568M) 8.750ms
[20.827s][info][gc] GC(89) Pause Young (Concurrent Start) (G1 Evacuation
Pause) 391M->301M(568M) 9.644ms
[20.827s][info][gc] GC(90) Concurrent Mark Cycle
[21.078s][info][gc] GC(91) Pause Young (Normal) (G1 Evacuation Pause)
405M->328M(1080M) 11.989ms
[21.567s][info][gc] GC(92) Pause Young (Normal) (G1 Evacuation Pause)
692M->420M(1080M) 24.249ms
[21.880s][info][gc] GC(93) Pause Young (Normal) (G1 Evacuation Pause)
728M->499M(1080M) 22.853ms
[22.108s][info][gc] GC(90) Pause Remark 717M->717M(1228M) 1.022ms
[22.161s][info][gc] GC(90) Pause Cleanup 766M->766M(1228M) 0.110ms
[22.165s][info][gc] GC(90) Concurrent Mark Cycle *1338.339ms*
[22.198s][info][gc] GC(94) Pause Young (Normal) (G1 Evacuation Pause)
771M->568M(1228M) 30.046ms
[22.513s][info][gc] GC(95) Pause Young (Concurrent Start) (G1 Evacuation
Pause) 880M->647M(1228M) 40.264ms
[22.513s][info][gc] GC(96) Concurrent Mark Cycle
[23.166s][info][gc] GC(97) Pause Young (Normal) (G1 Evacuation Pause)
911M->714M(3424M) 37.546ms
[24.947s][info][gc] GC(98) Pause Young (Normal) (G1 Evacuation Pause)
2054M->1056M(3424M) 77.930ms
[25.650s][info][gc] GC(99) Pause Young (Normal) (G1 Evacuation Pause)
1700M->1220M(3424M) 72.811ms
[26.181s][info][gc] GC(100) Pause Young (Normal) (G1 Evacuation Pause)
1636M->1326M(3424M) 68.893ms
[26.549s][info][gc] GC(101) Pause Young (Normal) (G1 Evacuation Pause)
1530M->1379M(3424M) 64.428ms
[26.903s][info][gc] GC(96) Pause Remark 1703M->1703M(3424M) 0.791ms
[27.040s][info][gc] GC(96) Pause Cleanup 1883M->1883M(3424M) 0.138ms
[27.052s][info][gc] GC(96) Concurrent Mark Cycle *4538.789ms*
[27.407s][info][gc] GC(102) Pause Young (Normal) (G1 Evacuation Pause)
2291M->1611M(3424M) 75.868ms
[28.131s][info][gc] GC(103) Pause Young (Concurrent Start) (G1 Evacuation
Pause) 2443M->1822M(3424M) 89.031ms
[28.131s][info][gc] GC(104) Concurrent Mark Cycle
[29.779s][info][gc] GC(105) Pause Young (Normal) (G1 Evacuation Pause)
2202M->1920M(4320M) 78.362ms
*SLOW 1181ms* (10,462 threads)
[30.163s][info][gc] GC(106) Pause Young (Normal) (G1 Evacuation Pause)
2140M->1976M(4320M) 78.550ms
[31.474s][info][gc] GC(107) Pause Young (Normal) (G1 Evacuation Pause)
3088M->2259M(4320M) 78.653ms
[32.507s][info][gc] GC(108) Pause Young (Normal) (G1 Evacuation Pause)
3259M->2515M(4320M) 87.112ms
[32.890s][info][gc] GC(109) Pause Young (Normal) (G1 Evacuation Pause)
2787M->2584M(4320M) 78.838ms
[33.741s][info][gc] GC(110) Pause Young (Normal) (G1 Evacuation Pause)
3400M->2789M(4320M) 74.854ms
[34.560s][info][gc] GC(111) Pause Young (Normal) (G1 Evacuation Pause)
3477M->2965M(4984M) 83.781ms
[35.021s][info][gc] GC(112) Pause Young (Normal) (G1 Evacuation Pause)
3197M->3023M(4984M) 77.760ms
[35.858s][info][gc] GC(113) Pause Young (Normal) (G1 Evacuation Pause)
3659M->3184M(4984M) 61.810ms
[36.736s][info][gc] GC(114) Pause Young (Normal) (G1 Evacuation Pause)
4000M->3392M(4984M) 77.678ms
[37.125s][info][gc] GC(115) Pause Young (Normal) (G1 Evacuation Pause)
3660M->3460M(4984M) 67.229ms
[37.834s][info][gc] GC(116) Pause Young (Normal) (G1 Evacuation Pause)
4100M->3622M(5428M) 71.500ms
[38.825s][info][gc] GC(117) Pause Young (Normal) (G1 Evacuation Pause)
4414M->3825M(5428M) 84.618ms
[39.223s][info][gc] GC(118) Pause Young (Normal) (G1 Evacuation Pause)
4089M->3892M(5428M) 71.304ms
[39.903s][info][gc] GC(119) Pause Young (Normal) (G1 Evacuation Pause)
4508M->4047M(5428M) 68.401ms
[40.529s][info][gc] GC(120) Pause Young (Normal) (G1 Evacuation Pause)
4567M->4181M(5696M) 73.192ms
[41.026s][info][gc] GC(121) Pause Young (Normal) (G1 Evacuation Pause)
4449M->4248M(5696M) 75.728ms
[41.602s][info][gc] GC(122) Pause Young (Normal) (G1 Evacuation Pause)
4792M->4386M(5696M) 72.129ms
[42.172s][info][gc] GC(123) Pause Young (Normal) (G1 Evacuation Pause)
4842M->4502M(5696M) 63.920ms
[42.593s][info][gc] GC(124) Pause Young (Normal) (G1 Evacuation Pause)
4762M->4568M(6000M) 62.869ms
[43.268s][info][gc] GC(125) Pause Young (Normal) (G1 Evacuation Pause)
5084M->4700M(6000M) 68.014ms
[43.766s][info][gc] GC(126) Pause Young (Normal) (G1 Evacuation Pause)
5076M->4795M(6000M) 69.085ms
[44.246s][info][gc] GC(127) Pause Young (Normal) (G1 Evacuation Pause)
5167M->4890M(6000M) 69.596ms
[44.740s][info][gc] GC(128) Pause Young (Normal) (G1 Evacuation Pause)
5202M->4969M(6300M) 78.933ms
[45.258s][info][gc] GC(104) Pause Remark 5367M->5367M(8020M) 3.784ms
[45.372s][info][gc] GC(129) Pause Young (Normal) (G1 Evacuation Pause)
5401M->5078M(8020M) 79.010ms
[46.176s][info][gc] GC(130) Pause Young (Normal) (G1 Evacuation Pause)
5634M->5220M(8020M) 85.153ms
[46.209s][info][gc] GC(104) Concurrent Mark Cycle *18078.519ms*
[46.749s][info][gc] GC(131) Pause Young (Normal) (G1 Evacuation Pause)
5728M->5350M(8020M) 86.881ms
[47.196s][info][gc] GC(132) Pause Young (Concurrent Start) (G1 Evacuation
Pause) 5714M->5442M(8020M) 86.186ms
[47.196s][info][gc] GC(133) Concurrent Mark Cycle
[48.186s][info][gc] GC(134) Pause Young (Normal) (G1 Evacuation Pause)
5810M->5535M(8020M) 78.621ms
[48.734s][info][gc] GC(135) Pause Young (Normal) (G1 Evacuation Pause)
5903M->5628M(8020M) 81.513ms
[49.267s][info][gc] GC(136) Pause Young (Normal) (G1 Evacuation Pause)
5996M->5721M(8020M) 88.816ms
[49.763s][info][gc] GC(137) Pause Young (Normal) (G1 Evacuation Pause)
6089M->5813M(8020M) 86.581ms
[50.292s][info][gc] GC(138) Pause Young (Normal) (G1 Evacuation Pause)
6181M->5906M(8020M) 80.532ms
[50.848s][info][gc] GC(139) Pause Young (Normal) (G1 Evacuation Pause)
6274M->5999M(8020M) 89.918ms
[51.398s][info][gc] GC(140) Pause Young (Normal) (G1 Evacuation Pause)
6367M->6093M(8020M) 82.920ms
[51.940s][info][gc] GC(141) Pause Young (Normal) (G1 Evacuation Pause)
6461M->6186M(8020M) 87.611ms
[52.443s][info][gc] GC(142) Pause Young (Normal) (G1 Evacuation Pause)
6554M->6277M(8020M) 88.283ms
[53.014s][info][gc] GC(143) Pause Young (Normal) (G1 Evacuation Pause)
6645M->6370M(8020M) 90.883ms
[53.659s][info][gc] GC(144) Pause Young (Normal) (G1 Evacuation Pause)
6738M->6463M(8020M) 89.134ms
[54.211s][info][gc] GC(145) Pause Young (Normal) (G1 Evacuation Pause)
6835M->6558M(8020M) 90.678ms
[54.797s][info][gc] GC(146) Pause Young (Normal) (G1 Evacuation Pause)
6926M->6650M(8020M) 93.411ms
[55.354s][info][gc] GC(147) Pause Young (Normal) (G1 Evacuation Pause)
7018M->6742M(8020M) 88.752ms
[55.962s][info][gc] GC(148) Pause Young (Normal) (G1 Evacuation Pause)
7114M->6836M(8020M) 83.537ms
[56.563s][info][gc] GC(149) Pause Young (Normal) (G1 Evacuation Pause)
7208M->6931M(8020M) 84.734ms
[57.144s][info][gc] GC(150) Pause Young (Normal) (G1 Evacuation Pause)
7299M->7023M(8020M) 87.452ms
[57.757s][info][gc] GC(151) Pause Young (Normal) (G1 Evacuation Pause)
7391M->7116M(8020M) 87.077ms
[58.351s][info][gc] GC(152) Pause Young (Normal) (G1 Evacuation Pause)
7484M->7210M(8020M) 92.428ms
[58.854s][info][gc] GC(153) Pause Young (Normal) (G1 Evacuation Pause)
7578M->7302M(8020M) 90.198ms
[59.402s][info][gc] GC(154) Pause Young (Normal) (G1 Evacuation Pause)
7674M->7396M(8020M) 87.753ms
[59.910s][info][gc] GC(155) Pause Young (Normal) (G1 Evacuation Pause)
7764M->7489M(8020M) 87.074ms
[60.443s][info][gc] GC(156) Pause Young (Normal) (G1 Evacuation Pause)
7857M->7583M(8020M) 88.563ms

*[61.017s][info][gc] GC(157) To-space exhausted*
[61.017s][info][gc] GC(157) Pause Young (Normal) (G1 Evacuation Pause) 7951M->7874M(8020M) 93.446ms

*[61.242s][info][gc] GC(158) To-space exhausted*
[61.242s][info][gc] GC(158) Pause Young (Normal) (G1 Evacuation Pause) 8002M->8002M(8020M) 80.639ms
[91.873s][info][gc] GC(159) Pause Full (G1 Evacuation Pause)
8002M->7493M(8020M) 30631.710ms



*[91.874s][info][gc] GC(133) Concurrent Mark Cycle 44677.721ms*
*SLOW 30714ms* TOO SLOW!!!

The first SLOW report happened at 10,462 threads. The time reported by
"SLOW" is the time from spawning the thread until waking up from the latch,
which is counted down once the thread is running and has reached the max
stack depth.
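
For context, here is a minimal, hypothetical sketch of that kind of test (it
is NOT the actual loom-trial code, which is linked further down in this
thread): each thread recurses to a fixed depth, counts down a "ready" latch,
then parks so the deep stack stays live, and the spawning loop reports SLOW
when the spawn-to-ready time crosses a threshold. The class name, depth and
threshold are made up for illustration, and it assumes a recent Loom/JDK
build with Thread.ofVirtual() and Thread.ofPlatform(); run with -Xlog:gc to
get logs like the ones above.

    // Hypothetical sketch only -- NOT the actual loom-trial benchmark.
    import java.util.concurrent.CountDownLatch;

    public class DeepStackSpawn {
        static final int DEPTH = 1000;                             // assumed stack depth
        static final CountDownLatch HOLD = new CountDownLatch(1);  // never released

        public static void main(String[] args) throws Exception {
            boolean virtual = args.length > 0 && "virtual".equals(args[0]);
            for (int count = 1; ; count++) {
                CountDownLatch ready = new CountDownLatch(1);
                Runnable task = () -> recurse(DEPTH, ready);
                long start = System.nanoTime();
                if (virtual)
                    Thread.ofVirtual().start(task);
                else
                    Thread.ofPlatform().start(task);
                ready.await();   // returns once the thread is running at max depth
                long ms = (System.nanoTime() - start) / 1_000_000;
                if (ms > 1_000)
                    System.out.println("SLOW " + ms + "ms (" + count + " threads)");
            }
        }

        // Recurse to the target depth, signal readiness, then park so the
        // deep stack stays live for the rest of the run.
        static void recurse(int depth, CountDownLatch ready) {
            if (depth == 0) {
                ready.countDown();
                try { HOLD.await(); } catch (InterruptedException ignored) { }
            } else {
                recurse(depth - 1, ready);
            }
        }
    }

The kernel-thread and virtual-thread runs would differ only in which builder
starts the threads.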

The last really long SLOW is pretty much at maximum heap, so that's not an
entirely fair part of the test.

Note also that, even with the GCs, the virtual threads reached 32K in 51s,
faster than the kernel threads' 55s.
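
As an aside, if one wanted to capture GC counts and times from inside the
test itself, rather than reading them off the -Xlog:gc timestamps above, a
small sketch along these lines would do. It only uses the standard
java.lang.management API; the class name is illustrative:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public final class GcStats {
        // Print cumulative GC counts and times, i.e. the kind of
        // "Time in Young GC: 23 ms (8 collections)" figures quoted below.
        public static void dump() {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }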

cheers



On Tue, 5 Jan 2021 at 15:58, Ron Pressler <ron.pressler at oracle.com> wrote:

> BTW, just to let me understand if there’s something of interest in the deep
> stacks case, when the Loom test had a 1.5 GC pause and the platform threads
> had zero, how many actual GC collections happened in the platform thread
> case?
>
> The reason I’m asking is that it’s possible that the work the GC does is
> the same in both cases, it’s just that a GC was never triggered in the
> platform thread case, but in a real application it will be.
>
> If a GC is triggered in both cases, then the two cases *should* require a
> similar amount of work in the GC, but due to a bug in Loom, virtual threads
> *may* require more. If that is the case, that’s another thing to fix.
>
> — Ron
>
>
> On 5 January 2021 at 14:47:22, Ron Pressler (ron.pressler at oracle.com) wrote:
>
> > Both the 4% CPU increase and GC pauses (I’ll get to what I mean later)
> > are bugs that we’ll try to fix. Especially the GC interaction uses code
> > that is currently in constant flux and is known to be suboptimal. I’m
> > genuinely happy that you’ve reported those bugs, but bugs are not
> > limitations of the model.
> >
> > Having said that, the interesting thing I see in the GC behaviour may not
> > be what you think is interesting. I don’t think the deep-stack test
> > actually exposed a problem of any kind, because when two things have
> > slightly different kinds of overhead, you can easily reduce the actual
> > work to zero and make the impact of overhead as high as you like, but
> > that’s not interesting for real work. I could be wrong, but I’ll give it
> > another look.
> >
> > The one thing in the posts — and thank you for them! — that immediately
> > flashed in blinking red to me as some serious issue is the following:
> >
> > Platform:
> >
> > Elapsed Time: 10568 ms
> > Time in Young GC: 5 ms (2 collections)
> > Time in Old GC: 0 ms (0 collections)
> >
> > Virtual:
> >
> > Elapsed Time: 10560 ms
> > Time in Young GC: 23 ms (8 collections)
> > Time in Old GC: 0 ms (0 collections)
> >
> > See that increase in young collection pause? That is the one thing that
> > actually touches on some core issue re virtual threads’ design (they
> > interact with the GC and GC barriers in a particular way that could change
> > the young collection), and might signify a potentially serious bug.
> >
> > And no, it is not only not obvious but downright wrong that moving stacks
> > from C stacks to the Java heap increases GC work, assuming there is actual
> > real work in the system too. GCs generally don’t work like people imagine
> > they do. The reason I said that GC work might be reduced is because of
> > some internal details: virtual thread stacks are mutated in a special way
> > and at a special time so that they don’t require GC barriers; this is not
> > true for Java objects in general.
> >
> > I’m reminded that about a year ago, I think, I saw a post about some
> > product written in Java. The product appears to be very good, but the post
> > said something specific that induced a face-palm. They said that their
> > product is GC “transparent” because they do all their allocations upfront.
> > I suggested that instead of just using Parallel GC they try G1, and
> > immediately they came back, surprised, that they’d seen a 15% performance
> > hit. The reason is that allocations and mutations (and even reads) cause
> > different work at different times in different GCs, and mutating one
> > object at one specific time might be more or less costly than allocating a
> > new one, depending on the GC and on the precise usage and timing of that
> > particular object.
> >
> > The lesson is that trying to reverse-engineer and out-think the VM is not
> > only futile — not only because there are too many variables but also
> > because the implementation is constantly changing — but can result in
> > downright bad advice, overfitted to very particular circumstances.
> >
> > Instead, it’s important to focus on generalities. The goal of Project Loom
> > is to make resource management around scheduling easy and efficient. When
> > it doesn’t do that, it’s a bug. I don’t agree at all with your
> > characterisation of what’s a limitation and what isn’t, but I don’t care:
> > think of them however you like. If you find bugs, we all win! But try
> > harder, because I think you’ve just scratched the surface.
> >
> > — Ron
> >
> >
> > On 5 January 2021 at 13:58:40, Greg Wilkins (gregw at webtide.com) wrote:
> >
> > >
> > > Ron,
> > >
> > > On Tue, 5 Jan 2021 at 13:19, Ron Pressler wrote:
> > > > If the listener might think it means that virtual threads somehow
> > > > *harm* the execution of CPU bound tasks, then it’s misleading.
> > > I've demonstrated
> > > (https://github.com/webtide/loom-trial/blob/main/src/main/java/org/webtide/loom/CPUBound.java)
> > > that using virtual threads can defer CPU bound tasks.
> > > I've demonstrated (https://webtide.com/do-looms-claims-stack-up-part-2/)
> > > that using virtual threads can double the CPU usage over pooled kernel
> > > threads. Even their best usage in my tests has a 4% CPU usage increase.
> > >
> > > > The “additional load on GC” statement is not, I believe, demonstrated.
> > >
> > > I've demonstrated (https://webtide.com/do-looms-claims-stack-up-part-1/)
> > > 1.5s GC pauses when using virtual threads at levels that kernel threads
> > > handle without pause.
> > >
> > > Besides, isn't it self-evident that moving stacks from static kernel
> > > memory to the dynamic heap is going to add GC load? You've even
> > > described how recycling virtual threads will not help reduce that
> > > additional load on the GC, as a reason not to pool virtual threads!
> > >
> > > > It is tautologically true that if your use case does not benefit from
> > > > virtual threads then it does not benefit from virtual threads.
> > >
> > > Indeed, but half the time it is not clear that you acknowledge that
> > > there are use cases that are not suitable for virtual threads. Just
> > > paragraphs above you imply that there is "no *harm*" in using virtual
> > > threads for CPU-bound tasks!
> > >
> > > > > Totally confused by the messaging from this project.
> > > > I’m confused by what you find confusing.
> > >
> > > This is not just a hive-mind issue, as the messaging just from you is
> > > inconsistent. One moment you are happy to describe limitations of
> > > virtual threads and agree that there are use cases that do benefit. Then
> > > the next moment we are back to "what limitations", "no harm", "not
> > > demonstrated", etc.
> > >
> > > None of my demonstrations are fatal flaws. Some may well be fixable,
> > > whilst others are just things to note when making a thread choice. But
> > > to deny them just encourages dwelling on the negatives rather than the
> > > positives!
> > >
> > > cheers
> > >
> > > --
> > > Greg Wilkins CTO http://webtide.com
>
>

-- 
Greg Wilkins <gregw at webtide.com> CTO http://webtide.com

