Re: Reinventing Text and NICs
Rolf Andersson / Pantor Engineering
22 Jul 2008 2:45PM ET
It's Pantor, not "panter".
So, retired, what's your interest in this?
Sorry, you haven't (yet) convinced me that you are not a troll.
> > I'm not sure if this is flame-bait, but I will answer anyway.
> Not at all. And your response is appreciated.
> > 1. Who designed this mess? I did, together with a number of people
> > within the mdowg (this is documented by some of the reports
> > available on the FPL website as well as the credits section on the
> > FAST page)
> Right, so what was the primary motive for doing this?
> Bandwidth reduction, better recovery, lower-latency processing,
> better throughput, etc. And all of that coming from adapting to
> streaming bits rather than bytes? Yes, that is all there is to it. And
> no, it doesn't do the job; ask the pros at Real Software about the hype
> their G2 caused.
> None of it justifies any of the complexity or has any technical value in
> my opinion.
> Why? Because it is already done by hardware and I would recommend
> talking to a few network card manufacturers and optical providers.
> > 2. Why wasn't the problem tackled with something other than the
> > usual mixing-up of bitwise logic with schema and XML bloat? There is
> > no mixing. You must have confused the wire representation with the
> > specification notation. Read the specs.
> I don't think I have confused anything.
> The wire representation does what NICs are already doing, and doing
> better.
> FAST as a state machine is a classic flaw. Stop bits and presence maps
> do not enable anything that existing network infrastructure, or
> application protocols done right, cannot already do.
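For concreteness, the presence map being dismissed here is, per the FAST 1.1 specification, a stop-bit terminated byte sequence whose data bits flag which operator-dependent fields actually appear in the message. A minimal decoding sketch (field semantics and template handling deliberately left out):

```python
def pmap_bits(pmap: bytes) -> list[bool]:
    """Expand a FAST presence map into per-field flags.

    Each byte carries 7 data bits (bit 6 down to bit 0); the high bit
    marks the last byte of the map. Each data bit says whether the
    corresponding operator-dependent field, in template order, is
    present on the wire.
    """
    bits = []
    for b in pmap:
        for shift in range(6, -1, -1):  # most significant data bit first
            bits.append(bool(b & (1 << shift)))
        if b & 0x80:                    # stop bit: pmap ends here
            break
    return bits

# One byte, stop bit set, only the first data bit on:
# the first pmap-tracked field is present, the other six are not.
assert pmap_bits(bytes([0xC0])) == [True] + [False] * 6
```

Nothing here a bitmap in any protocol header has not done for decades, which is essentially the post's point.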
> XML is used to express templates, flaky schemas (another topic too
> long to bother with), and the desire to repeat the same old FIX
> mistake where everybody tacks on whatever they like (and it varies
> between exchanges to a silly extent). It defines no protocol at all,
> only fields and values.
> Beyond providing flaky, brittle semantics, using XML as a template
> language for describing a protocol is nothing XML hasn't already done
> over its past 11 years, and SGML did it for much longer before that.
> Even schema-driven bitwise encoding and decoding code generation is
> nothing new; it has existed since the 1970s.
> Perhaps you're too young to remember Baudot code and where stop-bit-like
> ideas come from. The same applies to the 'presence map', which has a
> long history in computing (basic logic) and even relatively recent
> (a decade or more) applications in semantic modelling.
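Since stop bits keep coming up: FAST's unsigned-integer wire format packs 7 data bits per byte and sets the high bit only on the final byte, which is exactly the Baudot-era termination idea the post points at. A runnable sketch, assuming the encoding as described in the FAST 1.1 specification:

```python
def encode_uint(value: int) -> bytes:
    """FAST-style stop-bit encoding of an unsigned integer:
    7 data bits per byte, high bit set only on the final byte."""
    groups = [value & 0x7F]
    value >>= 7
    while value:
        groups.append(value & 0x7F)
        value >>= 7
    groups.reverse()                 # most significant group first
    groups[-1] |= 0x80               # stop bit terminates the field
    return bytes(groups)

def decode_uint(data: bytes) -> tuple[int, int]:
    """Decode one stop-bit integer; return (value, bytes consumed)."""
    value = 0
    for i, b in enumerate(data):
        value = (value << 7) | (b & 0x7F)
        if b & 0x80:                 # stop bit reached: field complete
            return value, i + 1
    raise ValueError("no stop bit found")

# 942755 splits into 7-bit groups 0x39, 0x45, 0x23;
# the stop bit turns the last group into 0xA3.
assert encode_uint(942755) == bytes([0x39, 0x45, 0xA3])
assert decode_uint(bytes([0x39, 0x45, 0xA3])) == (942755, 3)
```

The self-delimiting property means no length prefix is needed, at the cost of a branch per byte when decoding.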
> > 3. And if it just works and FAST is the-end-of-it-all, why do vendors
> > still provide proprietary interfaces to their data? Inertia?
> > Different goal functions? Your hypothesis ...?
> If it was inertia, they wouldn't invest in their feeds, would they?
> FAST has been around far too long without any concrete proof
> beyond 'look Ma, I can compress and publish numbers'. Compared to what,
> other than FIX?
> You should look up those players and see why they did it (hint:
> hard data).
> If it was different goal functions, what makes anything FIX or
> FAST FIX better?
> I have no hypothesis; I just see that both CME and Eurex, the drivers
> of this process, would rather keep things as they are:
> a) reselling bandwidth instead of tackling the problem of ancient,
> COBOL-like legacy;
> b) FIX keeps missing basic design goals, and everybody interprets and
> extends it their own way;
> c) it is proven to be inefficient for market feeds; worldwide,
> intergalactically, since inception;
> d) it is a maintenance nightmare that brought very little benefit and
> huge runtime penalties from its Java disciples.
> > A few comments on your observations:
> > c1) "Usual design-by-committee"
> > We have been careful not to do the "usual design-by-committee".
> > Please provide a relevant example of why you believe we have
> > exercised a "usual design-by-committee"
> I don't agree.
> No reference implementation, no valid technical reason or data, and no
> substantiated (not manufactured) samples of why existing infrastructure
> with far simpler approaches cannot do the job better or in a less
> complicated fashion.
> Remember VHS vs Betamax? Sure, it means nothing; FAST could be a winner,
> but then came the CD-ROM, etc.
> If you can provide a set of samples, I will provide proof FAST does not
> gain anything more than 3% over existing methods done in hardware and
> very little work done in app-level space. And that 3% will come at a
> huge cost that can be put to better use elsewhere.
> Reading networking hardware manuals and looking at their chip designs
> should do the job as well as looking at what non-FIX major players did.
> > c2) "no reference implementation for 1.1 (what a grand idea)" We
> > decided not to provide a reference impl. So far no-one has contributed
> > _any_ full open impl. I agree that this is a problem.
> It is more than a problem. Nothing out there has done well, even when
> it had a great reason to exist, because the lack of an implementation
> means the vapour-compared-against-raw-FIX and supposed benefits can
> never really be measured, and most importantly, never measured against
> alternatives.
> The key is hard data to prove your ideas and concepts (which are not
> new anyway) against what can already be done. Not assumptions, and not
> off-the-tangent benchmarks.
> This is not a flame at all; it is a request for justification in
> samples, so we can see for ourselves whether this makes any sense or
> not (hint: it doesn't).
> > c3) "Big exchanges taking it on and interpreting it own way" This is
> > indeed a problem.
> I can agree it is a classic FIX-approach problem and not just
> FAST's, and..
> > Some of the exchanges have been careful to get feedback and a second
> > opinion from the mdowg, while others have largely ignored this
> > opportunity.
> Those same exchanges should focus on what provides more value for
> everyone, not just their own interests.
> They should enhance their designs so as not to redistribute redundant
> things such as depth, pile gruesome hacks on snapshots and rewinds, or
> require silly private lines, VPNs, and huge pipes for their idiotic
> designs, and plenty more.
> But no, there is more interest in bolting on another layer that will
> not address the problem better than a network card; they would rather
> mix up application and network concepts.
> > A question to you regarding your second question:
> > - How would you have tackled the problem?
> Simple. For the big players that adopted this totally unnecessary
> process and hype: why not measure and test against an implementation
> that distributes only what is required, using classic or streaming
> compression (there are stacks of implementations out there)?
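The "classic or streaming compression" baseline proposed here can be sketched with nothing but the standard library: one shared deflate stream, flushed per message so each chunk is decodable on arrival while the compressor's history window keeps squeezing the repetitive FIX tags. The sample messages below are made up for illustration:

```python
import zlib

# One long-lived compressor/decompressor pair per feed connection.
comp = zlib.compressobj(level=6)
decomp = zlib.decompressobj()

messages = [
    b"8=FIX.4.2|35=X|268=1|269=0|270=100.25|271=500|",
    b"8=FIX.4.2|35=X|268=1|269=1|270=100.26|271=300|",
]
for msg in messages:
    # Z_SYNC_FLUSH emits all pending output on a byte boundary, so the
    # receiver can decode each message immediately without waiting for
    # the stream to end, yet the shared dictionary persists across
    # messages and keeps compressing the repeated tag=value prefixes.
    chunk = comp.compress(msg) + comp.flush(zlib.Z_SYNC_FLUSH)
    assert decomp.decompress(chunk) == msg  # usable message-by-message
```

This is the kind of off-the-shelf reference point against which any claimed FAST bandwidth advantage could be measured with hard data.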
> That way people don't pay the exchanges' mark-up on bandwidth (revenue
> before actually providing any service) or for their lack of interest in
> fixing the problems that led to FAST.
> That way we get to see what the tangible benefits are, not hypotheses
> or designs for the sake of design and showing off that leads to no
> advance.
> > And, please identify your company affiliation.
> Retired. And after 30 years of dealing with the same old thing, I am
> seeing it all repeat over again.
> Why not just FIX the problem? The FIX itself is within those exchanges
> and their infrastructure implementations.
> It has been, and still is, dead obvious even to fresh-out-of-college
> networking graduates screaming "bitwise, tagging and XML horror ahead"
> (i.e. a story for my son and his friends).