feat: add rate-limiter and test #3
Reference: geomind_code/geoip!3
No description provided.
#1
@salmaelsoly

The rate limit should be per IP. We could get a lot of legitimate requests, and then the rate limit would be some number we'd need to tune and adapt as our universe grows. Rate-limiting to a few requests per IP per 10 seconds would be much more in line with throttling a bad actor.
Changed title from *feat: add rate-limiter and test* to *WIP: feat: add rate-limiter and test*.

@delandtj wrote in #3 (comment):

Hi! Just wanted to point out that if you check `rate_limiter.rs`, you'll notice the rate limiting is indeed already per IP: each client gets their own sliding-window counter keyed by their IP address, so legitimate traffic from different IPs won't affect each other.
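To make the per-IP point concrete, here is a minimal std-only sketch of a per-IP windowed limiter. The names (`RateLimiter`, `Counter`) and the fixed-window simplification are illustrative assumptions, not necessarily the exact code in `rate_limiter.rs`, which uses a sliding window:

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

struct Counter {
    window_start: Instant,
    count: u64,
}

pub struct RateLimiter {
    counters: Mutex<HashMap<String, Counter>>,
    max_requests: u64,
    window: Duration,
}

impl RateLimiter {
    pub fn new(max_requests: u64, window: Duration) -> Self {
        Self {
            counters: Mutex::new(HashMap::new()),
            max_requests,
            window,
        }
    }

    /// Returns true if the request for this key (an IP string) is allowed.
    /// Every key gets its own counter, so one client's traffic never
    /// consumes another client's budget.
    pub fn check(&self, ip: &str) -> bool {
        let now = Instant::now();
        let mut counters = self.counters.lock().unwrap_or_else(|e| e.into_inner());
        let c = counters.entry(ip.to_string()).or_insert(Counter {
            window_start: now,
            count: 0,
        });
        // Reset the counter once the window has elapsed (simplified:
        // the window restarts at the arrival time of this request).
        if now.duration_since(c.window_start) >= self.window {
            c.window_start = now;
            c.count = 0;
        }
        if c.count >= self.max_requests {
            return false;
        }
        c.count += 1;
        true
    }
}
```

With a budget of 2 per window, a third request from the same IP is rejected while a different IP is still allowed.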
WIP: feat: add rate-limiter and testto feat: add rate-limiter and test@ -0,0 +48,4 @@let result: geoip2::City = lookup.decode().map_err(|e| LookupError::Database(e.to_string()))?.unwrap_or_default();is it okay to use default here?
it was okay since i check on the none fields of struct after that however i changed to directly return error
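A tiny sketch of the change described in the reply, under the assumption that decoding yields an `Option` for absent records; `decode_city` and the `LookupError::NotFound` variant are hypothetical stand-ins, not the PR's actual code:

```rust
#[derive(Debug)]
pub enum LookupError {
    Database(String),
    NotFound, // hypothetical variant: record missing from the database
}

// Stand-in for the decode step: `None` models an absent record, which
// previously became a silent `Default` value via unwrap_or_default().
// Converting it into an error surfaces the miss to the caller instead.
fn decode_city(raw: Option<&str>) -> Result<String, LookupError> {
    raw.map(str::to_string).ok_or(LookupError::NotFound)
}
```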
@ -0,0 +81,4 @@
```rust
    return ip_str.parse().map_err(|e| format!("invalid IP: {e}"));
}
if let Some(header_val) = headers.get("X-Real-IP") {
```
`X-Real-IP` is just an HTTP header. Any user can set it to anything.

It is used when running behind a reverse proxy. Should we remove it?
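One common middle ground, sketched here with hypothetical names (`client_ip`, `behind_trusted_proxy`), is to honor the header only when the server is explicitly configured as sitting behind a trusted reverse proxy, and to fall back to the socket peer address otherwise:

```rust
use std::collections::HashMap;
use std::net::IpAddr;

/// Only trust X-Real-IP when a trusted reverse proxy is known to set it;
/// otherwise use the socket peer address, since any client can send an
/// arbitrary X-Real-IP header directly.
fn client_ip(
    headers: &HashMap<String, String>,
    peer_addr: IpAddr,
    behind_trusted_proxy: bool,
) -> IpAddr {
    if behind_trusted_proxy {
        if let Some(v) = headers.get("X-Real-IP") {
            if let Ok(ip) = v.parse::<IpAddr>() {
                return ip;
            }
        }
    }
    peer_addr
}
```

This keeps the reverse-proxy deployment working without letting direct clients spoof their rate-limit key.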
@ -0,0 +38,4 @@
```rust
}
pub fn check(&self, key: &str) -> CheckResult {
    let mut counters = self.counters.lock().unwrap_or_else(|e| e.into_inner());
```
We can log a warning here:
```rust
.unwrap_or_else(|e| {
    tracing::error!("rate limiter mutex poisoned, recovering");
    e.into_inner()
})
```
so we don't silently recover.

@ -0,0 +53,4 @@
```rust
let elapsed = now.duration_since(counter.window_start);
if elapsed >= self.window_duration {
    let windows_passed = elapsed.as_secs() / self.window_duration.as_secs();
```
Integer division truncates `windows_passed`: in some cases we could be almost 2 windows in, yet `windows_passed` treats it as one window.
We should update the window start to be:
```rust
counter.window_start += self.window_duration * windows_passed;
```

@ -0,0 +61,4 @@
```rust
} else {
    counter.previous_count = 0;
    counter.current_count = 0;
    counter.window_start = now;
```
Here too: this means the new window starts at an arbitrary time (whenever this request happened to arrive), not at a consistently aligned boundary.
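The arithmetic behind both remarks can be sketched with plain second counts (a hypothetical helper, not the PR's code): truncating division still decides how many whole windows have elapsed, but `window_start` advances by that whole number of windows instead of jumping to `now`, so the boundaries stay aligned.

```rust
/// Advance a window start by the number of whole windows that have elapsed,
/// keeping boundaries aligned instead of restarting at the arrival time.
fn advance_window_start(window_start: u64, now: u64, window_secs: u64) -> u64 {
    let elapsed = now - window_start;
    let windows_passed = elapsed / window_secs; // integer division truncates
    window_start + windows_passed * window_secs
}
```

For a 10-second window starting at t=0, a request at t=19 is "almost 2 windows" in but only one whole window has passed, so the start moves to 10, not 19.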
@ -0,0 +88,4 @@
```rust
}
pub fn cleanup(&self) {
    let mut counters = self.counters.lock().unwrap_or_else(|e| e.into_inner());
```
We can log a warning here:
```rust
.unwrap_or_else(|e| {
    tracing::error!("rate limiter mutex poisoned, recovering");
    e.into_inner()
})
```
so we don't silently recover.

@ -0,0 +126,4 @@
```rust
("X-RateLimit-Limit", max_requests),
("X-RateLimit-Window", window_seconds.clone()),
("X-RateLimit-Remaining", "0".to_string()),
("Retry-After", window_seconds),
```
This makes the user always wait the full window, but `Retry-After` is supposed to tell the user how many seconds to wait before retrying; it should be only the time remaining in the current window.
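The suggested fix can be sketched as a small helper (the name `retry_after_secs` is hypothetical): subtract the time already spent in the current window from the window length, saturating at zero.

```rust
use std::time::Duration;

/// Seconds a client should wait before retrying: only the remainder of the
/// current window, not the full window length.
fn retry_after_secs(elapsed_in_window: Duration, window: Duration) -> u64 {
    window.saturating_sub(elapsed_in_window).as_secs()
}
```

A client rejected 7 seconds into a 10-second window would then see `Retry-After: 3` instead of `Retry-After: 10`.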