[squid-users] Can squid 3.5.2 really support rock in a wccp tproxy environment? (the same problem occurs in both SMP and single-process mode)

johnzeng johnzeng2013 at yahoo.com
Mon Mar 9 10:53:56 UTC 2015


On 2015-03-09 17:28, johnzeng wrote:
>
>> Hello Dear Amos:
>>
>> Thanks for your reply. I updated my config according to your advice.
>>
>> I did more testing on this part and I still face the same problem,
>> although I understand what you said ("they are *completely* unrelated").
>> Nevertheless, further testing keeps producing the same result:
>>
>> if I disable the cache_dir rock part, the wccp (tproxy) connection
>> succeeds;
>>
>> if I enable the cache_dir rock part, the wccp (tproxy) connection
>> fails.
>>
>> It is really strange. Maybe there is some error in the rock cache_dir
>> and WCCP is not running because of that error state, but I cannot find
>> any error in the logs after starting squid.
>>
>> This is the config for rock:
>>
>>     cache_dir rock /accerater/webcache3/storage/rock1 2646 
>> min-size=4096 max-size=262144 max-swap-rate=250 swap-timeout=350
>>
>>
>> This is the status info:
>>
>> squid -z
>>
>> 2015/03/09 15:22:45 kid3| Creating Rock db: 
>> /accerater/webcache3/storage/rock1/rock
>>
>>
>> squid -d1
>>
>> root at fastopmizer:/accerater/webcache3/sbin# 2015/03/09 15:23:34 kid3| 
>> Set Current Directory to /accerater/logs/webcache3/opmizer1
>> 2015/03/09 15:23:34 kid4| Set Current Directory to 
>> /accerater/logs/webcache3/opmizer1
>> 2015/03/09 15:23:34 kid3| Starting Squid Cache version 3.5.2 for 
>> x86_64-unknown-linux-gnu...
>> 2015/03/09 15:23:34 kid3| Service Name: squid
>> 2015/03/09 15:23:34 kid4| Starting Squid Cache version 3.5.2 for 
>> x86_64-unknown-linux-gnu...
>> 2015/03/09 15:23:34 kid3| Process ID 12049
>> 2015/03/09 15:23:34 kid2| Set Current Directory to 
>> /accerater/logs/webcache3/opmizer1
>> 2015/03/09 15:23:34 kid4| Service Name: squid
>> 2015/03/09 15:23:34 kid3| Process Roles: disker
>> 2015/03/09 15:23:34 kid4| Process ID 12048
>> 2015/03/09 15:23:34 kid2| Starting Squid Cache version 3.5.2 for 
>> x86_64-unknown-linux-gnu...
>> 2015/03/09 15:23:34 kid3| With 4096 file descriptors available
>> 2015/03/09 15:23:34 kid2| Service Name: squid
>> 2015/03/09 15:23:34 kid2| Process ID 12050
>> 2015/03/09 15:23:34 kid4| Process Roles: coordinator
>> 2015/03/09 15:23:34 kid3| Initializing IP Cache...
>> 2015/03/09 15:23:34 kid2| Process Roles: worker
>> 2015/03/09 15:23:34 kid4| With 4096 file descriptors available
>> 2015/03/09 15:23:34 kid2| With 4096 file descriptors available
>> 2015/03/09 15:23:34 kid4| Initializing IP Cache...
>> 2015/03/09 15:23:34 kid2| Initializing IP Cache...
>> 2015/03/09 15:23:34 kid3| DNS Socket created at [::], FD 7
>> 2015/03/09 15:23:34 kid3| DNS Socket created at 0.0.0.0, FD 8
>> 2015/03/09 15:23:34 kid4| DNS Socket created at [::], FD 7
>> 2015/03/09 15:23:34 kid4| DNS Socket created at 0.0.0.0, FD 8
>> 2015/03/09 15:23:34 kid3| Adding nameserver 127.0.0.1 from 
>> /etc/resolv.conf
>> 2015/03/09 15:23:34 kid2| DNS Socket created at [::], FD 10
>> 2015/03/09 15:23:34 kid4| Adding nameserver 127.0.0.1 from 
>> /etc/resolv.conf
>> 2015/03/09 15:23:34 kid1| Set Current Directory to 
>> /accerater/logs/webcache3/opmizer1
>> 2015/03/09 15:23:34 kid2| DNS Socket created at 0.0.0.0, FD 11
>> 2015/03/09 15:23:34 kid2| Adding nameserver 127.0.0.1 from 
>> /etc/resolv.conf
>> 2015/03/09 15:23:34 kid1| Starting Squid Cache version 3.5.2 for 
>> x86_64-unknown-linux-gnu...
>> 2015/03/09 15:23:34 kid1| Service Name: squid
>> 2015/03/09 15:23:34 kid1| Process ID 12051
>> 2015/03/09 15:23:34 kid1| Process Roles: worker
>> 2015/03/09 15:23:34 kid1| With 4096 file descriptors available
>> 2015/03/09 15:23:34 kid1| Initializing IP Cache...
>> 2015/03/09 15:23:34 kid1| DNS Socket created at [::], FD 10
>> 2015/03/09 15:23:34 kid1| DNS Socket created at 0.0.0.0, FD 11
>> 2015/03/09 15:23:34 kid1| Adding nameserver 127.0.0.1 from 
>> /etc/resolv.conf
>> 2015/03/09 15:23:34 kid3| Logfile: opening log 
>> daemon:/accerater/webcache3/var/logs/access.log
>> 2015/03/09 15:23:34 kid4| Logfile: opening log 
>> daemon:/accerater/webcache3/var/logs/access.log
>> 2015/03/09 15:23:34 kid3| Logfile Daemon: opening log 
>> /accerater/webcache3/var/logs/access.log
>>
>> 2015/03/09 15:23:34 kid4| Logfile Daemon: opening log 
>> /accerater/webcache3/var/logs/access.log
>> 2015/03/09 15:23:34 kid2| Logfile: opening log 
>> stdio:/accerater/logs/webcache3/accessb.log
>> 2015/03/09 15:23:34 kid1| Logfile: opening log 
>> stdio:/accerater/logs/webcache3/accessa.log
>> 2015/03/09 15:23:34 kid2| Logfile: opening log 
>> stdio:/accerater/logs/webcache3/storeb.log1
>> 2015/03/09 15:23:34 kid2| WARNING: disk-cache maximum object size is 
>> too large for mem-cache: 102400.00 KB > 90.00 KB
>> 2015/03/09 15:23:34 kid2| Swap maxSize 10444800 + 1024000 KB, 
>> estimated 882215 objects
>> 2015/03/09 15:23:34 kid2| Target number of buckets: 44110
>> 2015/03/09 15:23:34 kid2| Using 65536 Store buckets
>> 2015/03/09 15:23:34 kid2| Max Mem  size: 1024000 KB [shared]
>> 2015/03/09 15:23:34 kid2| Max Swap size: 10444800 KB
>> 2015/03/09 15:23:34 kid1| Logfile: opening log 
>> stdio:/accerater/logs/webcache3/storea.log1
>> 2015/03/09 15:23:34 kid1| WARNING: disk-cache maximum object size is 
>> too large for mem-cache: 102400.00 KB > 90.00 KB
>> 2015/03/09 15:23:34 kid1| Swap maxSize 10444800 + 1024000 KB, 
>> estimated 882215 objects
>> 2015/03/09 15:23:34 kid1| Target number of buckets: 44110
>> 2015/03/09 15:23:34 kid1| Using 65536 Store buckets
>> 2015/03/09 15:23:34 kid1| Max Mem  size: 1024000 KB [shared]
>> 2015/03/09 15:23:34 kid1| Max Swap size: 10444800 KB
>> 2015/03/09 15:23:34 kid2| Rebuilding storage in 
>> /accerater/webcache3/storage/aufs2/2 (dirty log)
>> 2015/03/09 15:23:34 kid2| Using Least Load store dir selection
>> 2015/03/09 15:23:34 kid2| Set Current Directory to 
>> /accerater/logs/webcache3/opmizer1
>> 2015/03/09 15:23:34 kid1| Rebuilding storage in 
>> /accerater/webcache3/storage/aufs1/1 (dirty log)
>> 2015/03/09 15:23:34 kid1| Using Least Load store dir selection
>> 2015/03/09 15:23:34 kid1| Set Current Directory to 
>> /accerater/logs/webcache3/opmizer1
>> 2015/03/09 15:23:34 kid3| Store logging disabled
>> 2015/03/09 15:23:34 kid3| Swap maxSize 2709504 + 1024000 KB, 
>> estimated 287192 objects
>> 2015/03/09 15:23:34 kid3| Target number of buckets: 14359
>> 2015/03/09 15:23:34 kid4| Store logging disabled
>> 2015/03/09 15:23:34 kid3| Using 16384 Store buckets
>> 2015/03/09 15:23:34 kid3| Max Mem  size: 1024000 KB [shared]
>> 2015/03/09 15:23:34 kid4| Swap maxSize 0 + 1024000 KB, estimated 
>> 78769 objects
>> 2015/03/09 15:23:34 kid3| Max Swap size: 2709504 KB
>> 2015/03/09 15:23:34 kid4| Target number of buckets: 3938
>> 2015/03/09 15:23:34 kid4| Using 8192 Store buckets
>> 2015/03/09 15:23:34 kid4| Max Mem  size: 1024000 KB [shared]
>> 2015/03/09 15:23:34 kid4| Max Swap size: 0 KB
>> 2015/03/09 15:23:34 kid4| Using Least Load store dir selection
>> 2015/03/09 15:23:34 kid4| Set Current Directory to 
>> /accerater/logs/webcache3/opmizer1
>>
>> 2015/03/09 15:23:34 kid3| Using Least Load store dir selection
>> 2015/03/09 15:23:34 kid3| Set Current Directory to 
>> /accerater/logs/webcache3/opmizer1
>> 2015/03/09 15:23:34 kid1| Finished loading MIME types and icons.
>> 2015/03/09 15:23:34 kid1| HTCP Disabled.
>> 2015/03/09 15:23:34 kid1| Sending SNMP messages from [::]:3401
>> 2015/03/09 15:23:34 kid2| Finished loading MIME types and icons.
>> 2015/03/09 15:23:34 kid2| HTCP Disabled.
>> 2015/03/09 15:23:34 kid2| Sending SNMP messages from [::]:3402
>> 2015/03/09 15:23:34 kid1| Squid plugin modules loaded: 0
>> 2015/03/09 15:23:34 kid1| Adaptation support is off.
>> 2015/03/09 15:23:34 kid2| Squid plugin modules loaded: 0
>> 2015/03/09 15:23:34 kid2| Adaptation support is off.
>> 2015/03/09 15:23:34 kid1| Done reading 
>> /accerater/webcache3/storage/aufs1/1 swaplog (10 entries)
>> 2015/03/09 15:23:34 kid2| Done reading 
>> /accerater/webcache3/storage/aufs2/2 swaplog (13 entries)
>> 2015/03/09 15:23:34 kid4| Finished loading MIME types and icons.
>> 2015/03/09 15:23:34 kid3| Finished loading MIME types and icons.
>> 2015/03/09 15:23:34 kid4| Accepting WCCPv2 messages on port 2048, FD 11.
>> 2015/03/09 15:23:34 kid4| Initialising all WCCPv2 lists
>> 2015/03/09 15:23:34 kid3| Squid plugin modules loaded: 0
>> 2015/03/09 15:23:34 kid3| Adaptation support is off.
>> 2015/03/09 15:23:34 kid3| Loading cache_dir #0 from 
>> /accerater/webcache3/storage/rock1/rock
>> 2015/03/09 15:23:34 kid4| Squid plugin modules loaded: 0
>> 2015/03/09 15:23:34 kid4| Adaptation support is off.
>> 2015/03/09 15:23:34 kid3| Store rebuilding is 0.59% complete
>> 2015/03/09 15:23:35 kid1| Finished rebuilding storage from disk.
>> 2015/03/09 15:23:35 kid1|        10 Entries scanned
>> 2015/03/09 15:23:35 kid1|         0 Invalid entries.
>> 2015/03/09 15:23:35 kid1|         0 With invalid flags.
>> 2015/03/09 15:23:35 kid1|        10 Objects loaded.
>> 2015/03/09 15:23:35 kid1|         0 Objects expired.
>> 2015/03/09 15:23:35 kid1|         0 Objects cancelled.
>> 2015/03/09 15:23:35 kid1|         0 Duplicate URLs purged.
>> 2015/03/09 15:23:35 kid1|         0 Swapfile clashes avoided.
>> 2015/03/09 15:23:35 kid1|   Took 1.01 seconds (  9.88 objects/sec).
>> 2015/03/09 15:23:35 kid2| Finished rebuilding storage from disk.
>> 2015/03/09 15:23:35 kid1| Beginning Validation Procedure
>> 2015/03/09 15:23:35 kid2|        13 Entries scanned
>> 2015/03/09 15:23:35 kid2|         0 Invalid entries.
>>
>> 2015/03/09 15:23:35 kid2|         0 With invalid flags.
>> 2015/03/09 15:23:35 kid2|        13 Objects loaded.
>> 2015/03/09 15:23:35 kid2|         0 Objects expired.
>> 2015/03/09 15:23:35 kid2|         0 Objects cancelled.
>> 2015/03/09 15:23:35 kid2|         0 Duplicate URLs purged.
>> 2015/03/09 15:23:35 kid2|         0 Swapfile clashes avoided.
>> 2015/03/09 15:23:35 kid2|   Took 1.01 seconds ( 12.84 objects/sec).
>> 2015/03/09 15:23:35 kid2| Beginning Validation Procedure
>> 2015/03/09 15:23:35 kid1|   Completed Validation Procedure
>> 2015/03/09 15:23:35 kid1|   Validated 10 Entries
>> 2015/03/09 15:23:35 kid1|   store_swap_size = 8840.00 KB
>> 2015/03/09 15:23:35 kid1| Accepting HTTP Socket connections at 
>> local=127.0.0.1:3220 remote=[::] FD 26 flags=1
>> 2015/03/09 15:23:35 kid1| Accepting TPROXY intercepted HTTP Socket 
>> connections at local=[::]:3221 remote=[::] FD 27 flags=17
>> 2015/03/09 15:23:35 kid1| Accepting SNMP messages on [::]:3401
>> 2015/03/09 15:23:35 kid2|   Completed Validation Procedure
>> 2015/03/09 15:23:35 kid2|   Validated 13 Entries
>> 2015/03/09 15:23:35 kid2|   store_swap_size = 16852.00 KB
>> 2015/03/09 15:23:35 kid2| Accepting HTTP Socket connections at 
>> local=127.0.0.1:3220 remote=[::] FD 26 flags=1
>> 2015/03/09 15:23:35 kid2| Accepting TPROXY intercepted HTTP Socket 
>> connections at local=[::]:3221 remote=[::] FD 27 flags=17
>> 2015/03/09 15:23:35 kid2| Accepting SNMP messages on [::]:3402
>> 2015/03/09 15:23:36 kid3| Finished rebuilding storage from disk.
>> 2015/03/09 15:23:36 kid3|    169343 Entries scanned
>> 2015/03/09 15:23:36 kid3|         0 Invalid entries.
>> 2015/03/09 15:23:36 kid3|         0 With invalid flags.
>> 2015/03/09 15:23:36 kid3|         0 Objects loaded.
>> 2015/03/09 15:23:36 kid3|         0 Objects expired.
>> 2015/03/09 15:23:36 kid3|         0 Objects cancelled.
>> 2015/03/09 15:23:36 kid3|         0 Duplicate URLs purged.
>> 2015/03/09 15:23:36 kid3|         0 Swapfile clashes avoided.
>> 2015/03/09 15:23:36 kid3|   Took 1.40 seconds (  0.00 objects/sec).
>> 2015/03/09 15:23:36 kid3| Beginning Validation Procedure
>> 2015/03/09 15:23:36 kid3|   Completed Validation Procedure
>> 2015/03/09 15:23:36 kid3|   Validated 0 Entries
>> 2015/03/09 15:23:36 kid3|   store_swap_size = 16.00 KB
>> 2015/03/09 15:23:36 kid3| storeLateRelease: released 0 objects
>> 2015/03/09 15:23:36 kid1| storeLateRelease: released 0 objects
>> 2015/03/09 15:23:36 kid2| storeLateRelease: released 0 objects
>>
>>
>>
>>
>>
>>
>> On 2015-03-09 13:01, Amos Jeffries wrote:
>>> On 9/03/2015 4:38 p.m., johnzeng wrote:
>>>>
>>>> Hello Dear All:
>>>>
>>>> I have recently run into a problem while configuring a wccp (tproxy)
>>>> environment with squid 3.5.2.
>>>>
>>>> If I disable the cache_dir rock part (keeping cache_dir aufs
>>>> enabled), wccp (tproxy) works:
>>>>
>>>> #cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
>>>> max-size=262144 max-swap-rate=250 swap-timeout=350
>>>>
>>>> But if I enable the cache_dir rock part (still with cache_dir aufs
>>>> enabled), wccp (tproxy) fails:
>>>>
>>>> cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
>>>> max-size=262144 max-swap-rate=250 swap-timeout=350
>>>>
>>>> Is something wrong in my config? If possible, please give me some
>>>> advice.
>>>>
>>>>
>>> For starters,
>>>   WCCP is a network protocol Squid uses to inform remote routers 
>>> that it
>>> is active and what traffic it can receive.
>>>   rock is a layout format for bits stored on a disk.
>>>   ... they are *completely* unrelated.
>>>
>>>
>>>
>>>> This is my config
>>>>
>>>>
>>>> thanks
>>>>
>>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 
>>>>
>>>>
>>>> coredump_dir /accerater/logs/webcache3/
>>>> unlinkd_program /accerater/webcache3/libexec/unlinkd
>>>> pid_filename /accerater/logs/webcache3/opmizer1/cache.pid
>>>>
>>>>
>>>> workers 2
>>>> cpu_affinity_map process_numbers=1,2 cores=1,3
>>>>
>>>> cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
>>>> max-size=262144 max-swap-rate=250 swap-timeout=350
>>>>
>>> You are telling Squid to start two controllers for the database file
>>> /accerater/webcache3/storage/rock1 from *each* worker. There is zero
>>> benefit from this, and the two controllers may encounter collisions as
>>> they compete for access to the disk without sharing atomic locks. That
>>> leads to cache corruption.
>>>
>>> Remove one of those two lines.
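>>>
>>> For example (a sketch only, reusing the path and parameters already in
>>> the config above), the rock line would then appear exactly once,
>>> outside all of the if-blocks:
>>>
>>>   cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096 max-size=262144 max-swap-rate=250 swap-timeout=350
>>>
>>> In SMP mode the disker process manages that single rock store on
>>> behalf of all workers, so no per-worker rock lines are needed.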
>>>
>>>
>>>> if ${process_number} = 1
>>>>
>>>> cache_swap_state /accerater/logs/webcache3/opmizer1_swap_log1
>>> Don't use cache_swap_state.
>>>
>>>> access_log stdio:/accerater/logs/webcache3/opmizer1_access.log squid
>>> Use this instead (mind the wrap):
>>>
>>> access_log
>>> stdio:/accerater/logs/webcache/opmizer${process_number}_access.log 
>>> squid
>>>
>>>> cache_log /accerater/logs/webcache3/opmizer1_cache.log
>>>
>>> Use this instead:
>>>
>>> cache_log /accerater/logs/webcache3/opmizer${process_number}_cache.log
>>>
>>>> cache_store_log stdio:/accerater/logs/webcache3/opmizer1_store.log
>>> You should not need cache_store_log at all.
>>>
>>> Either remove it or use this instead (mind the wrap):
>>>
>>> cache_store_log
>>> stdio:/accerater/logs/webcache3/opmizer${process_number}_store.log
>>>
>>>
>>>> url_rewrite_program /accerater/webcache3/media/mediatool/media2
>>>> store_id_program /accerater/webcache3/media/mediatool/media1
>>> Why do you have different binary executable names for the two
>>> workers' helpers?
>>>
>>> If they are actually different, then the traffic will have different
>>> things applied randomly depending on which worker happened to accept 
>>> the
>>> TCP connection. If they are the same, then you only need to define them
>>> once and workers will start their own sets as needed.
>>>
>>>
>>>> unique_hostname fast_opmizer1
>>>> snmp_port 3401
>>> Use this instead:
>>>
>>>   unique_hostname fast_opmizer${process_number}
>>>   snmp_port 340${process_number}
>>>
>>>
>>> All of the above details can move up out of the per-worker area.
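>>>
>>> Putting those suggestions together, a sketch of the de-duplicated
>>> section (using only directives already present in your config; verify
>>> paths and sizes against your own setup) could look like:
>>>
>>>   workers 2
>>>   cpu_affinity_map process_numbers=1,2 cores=1,3
>>>
>>>   # ${process_number} expands per worker, so no if-blocks are needed
>>>   access_log stdio:/accerater/logs/webcache3/opmizer${process_number}_access.log squid
>>>   cache_log /accerater/logs/webcache3/opmizer${process_number}_cache.log
>>>   unique_hostname fast_opmizer${process_number}
>>>   snmp_port 340${process_number}
>>>
>>>   # AUFS dirs cannot be shared between workers; one per worker
>>>   cache_dir aufs /accerater/webcache3/storage/aufs1/${process_number} 5200 16 64 min-size=262145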
>>>
>>>
>>>> #cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
>>>> max-size=262144 max-swap-rate=250 swap-timeout=350
>>>>
>>>> cache_dir aufs /accerater/webcache3/storage/aufs1/${process_number} 
>>>> 5200
>>>> 16 64 min-size=262145
>>>>
>>>> else
>>>>
>>>> #endif
>>>>
>>>>
>>>> if ${process_number} = 2
>>>>
>>>>
>>>> cache_swap_state /accerater/logs/webcache3/opmizer2_swap_log
>>>> access_log stdio:/accerater/logs/webcache3/opmizer2_access.log squid
>>>> cache_log /accerater/logs/webcache3/opmizer2_cache.log
>>>> cache_store_log stdio:/accerater/logs/webcache3/opmizer2_store.log
>>>> url_rewrite_program /accerater/webcache3/media/mediatool/media4
>>>> store_id_program /accerater/webcache3/media/mediatool/media3
>>>> unique_hostname fast_opmizer2
>>>> snmp_port 3402
>>>>
>>> Same notes as for worker 1.
>>>
>>>
>>>> #cache_dir rock /accerater/webcache3/storage/rock2 2646 min-size=4096
>>>> max-size=262144 max-swap-rate=250 swap-timeout=350
>>>>
>>>> cache_dir aufs /accerater/webcache3/storage/aufs1/${process_number} 
>>>> 5200
>>>> 16 64 min-size=262145
>>>>
>>>> endif
>>>>
>>>> endif
>>>>
>>>>
>>>>
>>>> http_port 127.0.0.1:3220
>>>> http_port 3221 tproxy
>>>>
>>>> wccp_version 2
>>>> wccp2_router 192.168.2.1
>>>> wccp2_forwarding_method 1
>>>> wccp2_return_method 1
>>>> wccp2_assignment_method 1
>>>> wccp2_service dynamic 80
>>>> wccp2_service dynamic 90
>>>> wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 
>>>> ports=80
>>>> wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source
>>>> priority=240 ports=80
>>> Both workers are telling the WCCP router in different packets that they
>>> are available on the same IP:port. In theory that should work fine,
>>> since the router is just getting twice as many updates as it needs to
>>> keep the proxy registered. In practice some people are finding that
>>> certain routers can't cope with the extra registration operations.
>>>
>>>
>>>> tcp_outgoing_address 192.168.2.2
>>>>
>>> Be aware that TPROXY spoofs the client's outgoing address. This line
>>> has no effect on the TPROXY-intercepted traffic; only the traffic
>>> received on port 3220 will use this outgoing address.
>>>   Your WCCP rules need to account for that by not depending on the
>>> packet IP addresses.
>>>
>>>
>>> Amos
>>
>


