<div dir="ltr"><div class="gmail_default" style="font-family:"courier new",monospace;font-size:small">The big difference is the single point of failure.<br><br>With a host doing that redirection, We will have load balancing and different boxes(physically different) running multiple instances.<br>But We will still suffer from single-point-of-failure.<br><br>To solve that, We will need another layer of redundancy of that redirector layer.<br>And will be necessary at least two other machines. Or even more, depending on how strong is the need for redundancy.<br>More things to deal with...<br><br><br><br>My idea is to join Redundancy and Load-Balancing on the same layer solution.</div><div class="gmail_default" style="font-family:courier new,monospace;font-size:small"><br>Exemplifying the concept:<br>A scenario of an IXP with 2000 Participants, 15 Facilities been 5 of those designed for Computing Resources.<br>(Let's consider, just to simplify the example, just one Route-Server. To have two Route-Servers, just double the recipe.)<br>- Slice the 2000 Peers in 5 Resource Pools, according to the facilities where those peers are connected.<br>- 7 Route-Servers, been 5 of then the primary of each resource pool, and the other two being the secondary and tertiary failover.<br>- All the route-server with the same Route-Polices and Peers configuration provisioned by a central CI/CD.<br>- Adjust Heart-Beat to deal with those resource pools.<br><br>In this scenario, on an event where a facility(in Brazil the common name is PIX) became isolated from the rest of the Mesh of the rest of the IXP Lan, those participants in that facility will still be exchanging routes with each other.<br><br><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Em qua., 20 de jan. de 2021 às 14:25, Alexander Zubkov <<a href="mailto:green@qrator.net">green@qrator.net</a>> escreveu:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>

Thank you for the link, I looked at the presentation. It looks like
what I thought it was (and you explained it the same way) - balancing
connections to multiple Bird instances. So I am still a little
confused about the features you want Bird to support in relation to
that. Because most of the "magic" you want - like HA on L2/L3, running
on different machines, etc. - can still be done by other networking
tools and configuration. And if you have some real case that you have
problems with, it looks quite interesting to me, and I could try to
help you with the configuration if you wish, and maybe we could even
make some useful case of it for other bird users.

On Tue, Jan 19, 2021 at 10:01 PM Douglas Fischer
<fischerdouglas@gmail.com> wrote:
>
> Vertical scalability of route-servers on very large IXPs is a challenge!
> We are talking about 400-2200 peers...
> https://ixpdb.euro-ix.net/en/ixpdb/ixps/?sort=participants&reverse=1&
>
> As already mentioned, Bird still does not deal very well with
> multi-threading (even in version 2).
> So, for that, threads with more gigahertz are better than several threads.
>
> In Bird's world, the solution for that is MultiBird.
> That solution is explained here:
> -> https://www.euro-ix.net/media/filer_public/40/8b/408bd0bb-6835-4807-8677-0a1961bd3fba/flock-of-birds_ljtmypd.pdf
> -> https://www.youtube.com/watch?v=dwRwF7Bu8as
> It is in pt_BR, but I believe that if you activate the automatic subtitles
> and automatic translation to your language, it will be enough to understand.
>
>
> What I'm proposing here is just a different method of running multiple
> instances of Bird, with the possibility of those being on different boxes,
> or even different sites.
>
>
>
> On Tue, Jan 19, 2021 at 12:22 PM Alexander Zubkov <green@qrator.net> wrote:
>>
>> But you wrote that for scaling there are load balancers to balance
>> sessions among different bird instances. So VRRP + Load Balancer will
>> give you what you want. You can also try to bind several birds to a
>> single address in linux (probably a little patching is required to set
>> socket options) and linux will balance sessions between them. You may
>> also want to exchange routing information somehow between your bird
>> instances, but I think that can also be solved somehow, with a couple
>> of route reflectors for example.
>> I still do not understand what you want to see in Bird itself. I
>> haven't run large IXPs, so I may not be aware of something and would
>> be glad if you explained it in more detail.
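>>
>> A rough sketch of that socket-option idea, assuming it is the Linux
>> SO_REUSEPORT behaviour (kernel 3.9+) that spreads the sessions; port 1790
>> is used here instead of BGP's 179 only so the example runs without root:
>>
>> import socket
>>
>> def listener() -> socket.socket:
>>     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>     s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
>>     s.bind(("0.0.0.0", 1790))   # each bird-like instance binds the same address
>>     s.listen()
>>     return s
>>
>> a, b = listener(), listener()   # e.g. two instances sharing one address
>> # The kernel now hashes incoming TCP sessions onto 'a' or 'b'.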
>>
>> On Tue, Jan 19, 2021 at 3:22 PM Douglas Fischer
>> <fischerdouglas@gmail.com> wrote:
>> >
>> > As I mentioned initially, my focus was on "large environments of IXPs".
>> > Considering that, L3 anycast does not apply very well to that scenario.
>> > (I don't know any IXPs that use route-servers outside of the LAN of the IXP.)
>> >
>> > Using VRRP is an excellent method to provide fail-over on L2.
>> > (I have used it a lot in several application scenarios.)
>> > But it does not provide load-balancing, just fail-over.
>> >
>> > Considering "large environments of IXPs", and the fact that even in
>> > Bird 2 the multi-thread limitation is not completely solved, the
>> > solution for that is load-balancing. MultiBird does it VERY WELL.
>> > But until now we (at least I) have seen only "single-host" based
>> > solutions, using NAT/forwarded connections.
>> >
>> > With this suggestion, using L2 load-balancing based on MAC-IP-mapping
>> > manipulations, it is possible to remove the "single-host" point of failure.
>> >
>> > On Tue, Jan 19, 2021 at 10:48 AM Alexander Zubkov <green@qrator.net> wrote:
>> >>
>> >> Hi,
>> >>
>> >> You can use VRRP or a similar protocol on L2, or dynamic routing with
>> >> anycast on L3, for reliability. I do not see what you want in Bird.
>> >> Could you explain more?
>> >>
>> >> On Tue, Jan 19, 2021 at 1:26 PM Douglas Fischer
>> >> <fischerdouglas@gmail.com> wrote:
>> >> >
>> >> > I was studying the concepts of multi-bird for large environments of IXPs.
>> >> >
>> >> > And, beyond the extra complexity that it brings to the environment, one of
>> >> > the weak points I saw was the fact that all the Bird instances are on the
>> >> > same box (VM, container, etc.).
>> >> >
>> >> > A friend mentioned that some tests were made with a load balancer
>> >> > redirecting the post-NATed connections to other boxes.
>> >> > But even in that scenario, that load balancer would be a
>> >> > single-point-of-failure/bottleneck.
>> >> >
>> >> > So I was remembering Cisco GLBP and the Heart-Beat protocol.
>> >> > Those protocols announce different MAC addresses for the same IPv4/IPv6
>> >> > address, based on the source of the ARP/ND query,
>> >> > making load-balancing/fail-over based on the glue between layer 2 and layer 3.
>> >> > P.S.: Several scenarios use that concept: Corosync, Windows Cluster,
>> >> > Oracle RAC, etc.
>> >> >
>> >> > Considering that concept, and joining it with multibird:
>> >> > It would be possible to create groups of sources and assign different
>> >> > priorities to those groups on each instance of Bird.
>> >> > In this case, each Bird instance could run on a different box, or even at a
>> >> > different site.
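>> >> >
>> >> > A rough sketch of that mapping (addresses and names are hypothetical,
>> >> > just to illustrate the concept):
>> >> >
>> >> > # One shared route-server IP, but the ARP/ND reply carries a different
>> >> > # MAC depending on the group of the asking peer and on which instances
>> >> > # the heart-beat currently sees as alive.
>> >> > RS_MAC = {"rs1": "02:00:00:00:00:01",
>> >> >           "rs2": "02:00:00:00:00:02",
>> >> >           "rs6": "02:00:00:00:00:06"}      # shared failover instance
>> >> > PRIORITY = {"pool-a": ["rs1", "rs6"],      # per-group preference order
>> >> >             "pool-b": ["rs2", "rs6"]}
>> >> > POOL_OF = {"10.0.0.10": "pool-a", "10.0.0.20": "pool-b"}
>> >> >
>> >> > def answer_arp(peer_ip: str, alive: set[str]) -> str:
>> >> >     """MAC to put in the ARP/ND reply for the route-server address."""
>> >> >     for rs in PRIORITY[POOL_OF[peer_ip]]:
>> >> >         if rs in alive:
>> >> >             return RS_MAC[rs]
>> >> >     raise RuntimeError("no route-server instance alive")
>> >> >
>> >> > print(answer_arp("10.0.0.20", {"rs1", "rs6"}))  # rs2 down -> rs6's MAC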
>> >> >
>> >> > Further than that, on IXPs with a large number of participants, it would be
>> >> > possible to define some affinity for those priority groups based, for
>> >> > example, on the facility where those participants are connected.
>> >> >
>> >> > I have a feeling that this would be especially useful for remote peering
>> >> > scenarios.
>> >> >
>> >> >
>> >> > Just a crazy idea to share with colleagues.
>> >> > Maybe from here, something good could arise.
>> >> >
>> >> >
>> >> > --
>> >> > Douglas Fernando Fischer
>> >> > Control and Automation Engineer
>> >
>> >
>> >
>> > --
>> > Douglas Fernando Fischer
>> > Control and Automation Engineer
>
>
>
> --
> Douglas Fernando Fischer
> Control and Automation Engineer

--
Douglas Fernando Fischer
Control and Automation Engineer