[squid-users] Bug: Missing MemObject::storeId value

Heiler Bemerguy heiler.bemerguy at cinbesa.com.br
Mon Sep 25 16:21:26 UTC 2017


I have been seeing this forever... 3.5.27 with one cache_peer and 4 rock stores.

2017/09/21 11:19:45 kid1| Bug: Missing MemObject::storeId value
2017/09/21 11:19:45 kid1| mem_hdr: 0x1902d240 nodes.start() 0x552baa0
2017/09/21 11:19:45 kid1| mem_hdr: 0x1902d240 nodes.finish() 0x552baa0
2017/09/21 11:19:45 kid1| MemObject->start_ping: 0.000000
2017/09/21 11:19:45 kid1| MemObject->inmem_hi: 3335
2017/09/21 11:19:45 kid1| MemObject->inmem_lo: 0
2017/09/21 11:19:45 kid1| MemObject->nclients: 0
2017/09/21 11:19:45 kid1| MemObject->reply: 0xae4da80
2017/09/21 11:19:45 kid1| MemObject->request: 0
2017/09/21 11:19:45 kid1| MemObject->logUri:
2017/09/21 11:19:45 kid1| MemObject->storeId:

2017/09/21 11:19:46 kid1| Bug: Missing MemObject::storeId value
2017/09/21 11:19:46 kid1| mem_hdr: 0x6ce75d0 nodes.start() 0x54585b0
2017/09/21 11:19:46 kid1| mem_hdr: 0x6ce75d0 nodes.finish() 0xb237550
2017/09/21 11:19:46 kid1| MemObject->start_ping: 0.000000
2017/09/21 11:19:46 kid1| MemObject->inmem_hi: 14892
2017/09/21 11:19:46 kid1| MemObject->inmem_lo: 0
2017/09/21 11:19:46 kid1| MemObject->nclients: 0
2017/09/21 11:19:46 kid1| MemObject->reply: 0x9f08d10
2017/09/21 11:19:46 kid1| MemObject->request: 0
2017/09/21 11:19:46 kid1| MemObject->logUri:
2017/09/21 11:19:46 kid1| MemObject->storeId:


On 22/09/2017 20:18, Aaron Turner wrote:
> Version: 3.5.26 on CentOS 7.3, on an AWS EC2 m3.xlarge with 2x 100GB
> EBS volumes for the rock cache.
>
> We're doing some basic system tests and seeing a bunch of errors like:
>
> 2017/09/22 22:43:15 kid1| Bug: Missing MemObject::storeId value
> 2017/09/22 22:43:15 kid1| mem_hdr: 0x7f169d0a2a70 nodes.start() 0x7f169c6cc9d0
> 2017/09/22 22:43:15 kid1| mem_hdr: 0x7f169d0a2a70 nodes.finish() 0x7f169dae4e40
> 2017/09/22 22:43:15 kid1| MemObject->start_ping: 0.000000
> 2017/09/22 22:43:15 kid1| MemObject->inmem_hi: 20209
> 2017/09/22 22:43:15 kid1| MemObject->inmem_lo: 0
> 2017/09/22 22:43:15 kid1| MemObject->nclients: 0
> 2017/09/22 22:43:15 kid1| MemObject->reply: 0x7f167ee60db0
> 2017/09/22 22:43:15 kid1| MemObject->request: 0
> 2017/09/22 22:43:15 kid1| MemObject->logUri:
> 2017/09/22 22:43:15 kid1| MemObject->storeId:
> 2017/09/22 22:43:15 kid1| Bug: Missing MemObject::storeId value
> 2017/09/22 22:43:15 kid1| mem_hdr: 0x7f16a0388760 nodes.start() 0x7f16a6a4a500
> 2017/09/22 22:43:15 kid1| mem_hdr: 0x7f16a0388760 nodes.finish() 0x7f16a6a4a4d0
> 2017/09/22 22:43:15 kid1| MemObject->start_ping: 0.000000
> 2017/09/22 22:43:15 kid1| MemObject->inmem_hi: 50265
> 2017/09/22 22:43:15 kid1| MemObject->inmem_lo: 0
> 2017/09/22 22:43:15 kid1| MemObject->nclients: 0
> 2017/09/22 22:43:15 kid1| MemObject->reply: 0x7f169f83d7d0
> 2017/09/22 22:43:15 kid1| MemObject->request: 0
> 2017/09/22 22:43:15 kid1| MemObject->logUri:
> 2017/09/22 22:43:15 kid1| MemObject->storeId:
>
> I did some googling, and there seem to be a lot of comments about this
> in combination with Rock storage (which we're using) and ICP/HTCP
> (which we're not). Curious whether this is the same bug or something
> new. Are there config changes we can make to prevent it (perhaps
> switching away from the rock cache)?
>
> We have a bunch of clients behind haproxy, which load balances across
> 4x Squid. The config of each Squid is as follows:
>
> http_access allow localhost manager
> http_access deny manager
>
> external_acl_type client_ip_map_0 %>ha{Our-Client} /usr/lib64/squid/user_loadbalance.py 0 4
> external_acl_type client_ip_map_1 %>ha{Our-Client} /usr/lib64/squid/user_loadbalance.py 1 4
> external_acl_type client_ip_map_2 %>ha{Our-Client} /usr/lib64/squid/user_loadbalance.py 2 4
> external_acl_type client_ip_map_3 %>ha{Our-Client} /usr/lib64/squid/user_loadbalance.py 3 4
>
> acl client_group_0 external client_ip_map_0
> acl client_group_1 external client_ip_map_1
> acl client_group_2 external client_ip_map_2
> acl client_group_3 external client_ip_map_3
>
> http_access allow client_group_0
> http_access allow client_group_1
> http_access allow client_group_2
> http_access allow client_group_3
> http_access deny all
>
> tcp_outgoing_address 10.93.2.41 client_group_0
> tcp_outgoing_address 10.93.2.76 client_group_1
> tcp_outgoing_address 10.93.2.198 client_group_2
> tcp_outgoing_address 10.93.3.178 client_group_3
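>
> For context, a helper wired up this way speaks Squid's external ACL
> helper protocol: Squid writes one line per lookup (here, the
> %>ha{Our-Client} header value) to the helper's stdin, and the helper
> answers OK or ERR on stdout. A minimal sketch of such a bucketing
> helper follows; the hashing logic is illustrative, not the actual
> user_loadbalance.py:
>
> #!/usr/bin/env python
> # Illustrative sketch only; invoked as: user_loadbalance.py <bucket> <total>
> import sys
> import zlib
>
> bucket = int(sys.argv[1])  # which client group this instance matches
> total = int(sys.argv[2])   # total number of groups (4 in our setup)
>
> for line in sys.stdin:
>     key = line.strip()  # the Our-Client header value sent by Squid
>     # CRC32 gives a stable hash, so a given client always lands in the
>     # same bucket (Python's built-in hash() is randomized per process).
>     if key and zlib.crc32(key.encode()) % total == bucket:
>         sys.stdout.write("OK\n")
>     else:
>         sys.stdout.write("ERR\n")
>     sys.stdout.flush()  # Squid blocks waiting for each reply
>
> Each of the four external_acl_type instances runs the same script with
> a different bucket index, so exactly one client_group_N ACL matches a
> given client and the tcp_outgoing_address mapping stays consistent.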
>
> cache_dir rock /var/lib/squid/cache1 51200
> cache_dir rock /var/lib/squid/cache2 51200
> coredump_dir /var/spool/squid
> maximum_object_size_in_memory 8 MB
> maximum_object_size 8 MB
>
> cache_mem 6 GB
> memory_cache_shared on
> workers 4
>
> refresh_pattern . 0 100% 30
>
> http_port squid0001:3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=400MB cert=/etc/squid/ssl_cert/myCA.pem
> http_port localhost:3128
> ssl_bump bump all
>
> request_header_access Our-Client deny all
> request_header_access Via deny all
> forwarded_for delete
>
> visible_hostname squid0001.lab.company.com
> logformat adttest %tg %6tr %>a %Ss/%03>Hs %<st %rm %>ru %[un %Sh/%<a %mt %ea
> access_log daemon:/var/log/squid/access.${process_number}.log adttest
> icon_directory /usr/share/squid/icons
>
> sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
> sslcrtd_children 32 startup=2 idle=2
> sslproxy_session_cache_size 100 MB
> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
>
>
> --
> Aaron Turner
> https://synfin.net/         Twitter: @synfinatic
> My father once told me that respect for the truth comes close to being
> the basis for all morality.  "Something cannot emerge from nothing,"
> he said.  This is profound thinking if you understand how unstable
> "the truth" can be.  -- Frank Herbert, Dune

-- 
Atenciosamente / Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751


