Is it some kind of a Monte Carlo simulation against a ruleset over a data space?
On Mon, Oct 24, 2022 at 10:41:26AM -0700, Charles Polisher wrote:
On 10/24/22 09:56, Gary wrote:
I don't think I understand. What are you
trying to do?
Are you looking at performance or load balancing?
-Gary
Consistency and correctness. Firewall rules can be
self-inconsistent. Hand-audits of firewalls often show lots
of mistakes, for example this simple "shadowing":
target  prot  source        destination
ACCEPT  tcp   10.2.0.0/16   192.168.1.200  ctstate NEW tcp dpt:53  /* shadows next rule */
ACCEPT  tcp   10.2.0.1      192.168.1.200  ctstate NEW tcp dpt:53  /* shadowed */
That second rule can't match anything, because the first rule
"shadows" it, that is, it matches an equal or larger address
region. Many hard-to-analyze variations of this exist. Besides
shadowing, there are hard-to-suss-out classes of mistakes, like
rules that call out addresses for hosts that no longer exist,
or failure to admit packets for running services from all needed
subnets. I worked on a firewall that had a typical big ball o'
mud ruleset that had been hacked on for years -- everyone was
afraid to make changes because it was terrifically hard to tell
what the outcome would be.
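
For what it's worth, here is a minimal sketch in Python of the pairwise
check implied above: represent each rule as (target, protocol, source,
destination, dport) and flag a later rule as shadowed when an earlier
rule's source and destination networks contain it for the same protocol
and port. The Rule layout and field names are my own illustration, not a
parser for real iptables output.

#!/usr/bin/env python3
# Sketch of shadowing detection: a later rule is shadowed when an earlier
# rule matches an equal or larger address region for the same protocol and
# destination port, so the later rule can never fire (its target is then
# irrelevant). Illustrative only; not a real iptables parser.
from ipaddress import ip_network
from typing import NamedTuple

class Rule(NamedTuple):
    target: str   # e.g. ACCEPT, DROP
    proto: str    # e.g. tcp
    source: str   # CIDR or single address
    dest: str     # CIDR or single address
    dport: int

def shadows(earlier: Rule, later: Rule) -> bool:
    """True if `earlier` matches every packet `later` could match."""
    return (earlier.proto == later.proto
            and earlier.dport == later.dport
            and ip_network(later.source).subnet_of(ip_network(earlier.source))
            and ip_network(later.dest).subnet_of(ip_network(earlier.dest)))

rules = [
    Rule("ACCEPT", "tcp", "10.2.0.0/16", "192.168.1.200/32", 53),
    Rule("ACCEPT", "tcp", "10.2.0.1/32", "192.168.1.200/32", 53),
]

for i, later in enumerate(rules):
    for earlier in rules[:i]:
        if shadows(earlier, later):
            print(f"rule {i} is shadowed by an earlier rule: {later}")

Running it against the two example rules reports the second one as
shadowed. A real analysis would also have to handle interface, state, and
port-range matches, which is where it gets hard.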
_______________________________________________
Lug-nuts mailing list -- lug-nuts(a)bigbrie.com
To unsubscribe send an email to lug-nuts-leave(a)bigbrie.com