
0 votes
1.3k views
This challenge requires us to find 20 caches that link 5 cachers. In graph theory terms, the cachers are nodes and the caches that connect them are the edges. Is it possible to build a checker for this?
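For illustration, here is a minimal sketch in plain Lua of that graph model (the table layout and function name are my own, not from any checker): a directed edge from finder A to owner B exists when A has logged a puzzle cache owned by B, and a clique of five members needs all 5 x 4 = 20 ordered pairs covered.

-- edges[finder][owner] = GC code of a puzzle cache owned by `owner` and found by `finder`.
-- Returns true plus the list of 20 linking caches if the five cachers form a clique.
local function isClique(edges, cachers)
    local linkingCaches = {}
    for _, finder in ipairs(cachers) do
        for _, owner in ipairs(cachers) do
            if finder ~= owner then
                local gccode = edges[finder] and edges[finder][owner]
                if gccode == nil then
                    return false, nil          -- missing edge: not a clique
                end
                table.insert(linkingCaches, gccode)
            end
        end
    end
    return true, linkingCaches                 -- 5 * 4 = 20 entries (possibly with duplicates)
end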

 

Thank you.
in Miscellaneous by pcc322 (360 points)
It is possible in theory, but I suspect the time limit will be a problem. In my case, all unknown finds for 228 other people would have to be downloaded and processed.
I have only 800 unknown finds, and there are many people with more.
I might be overestimating the time to fetch all finds from the database. Unfortunately there is a bug and I can't access the checker system at all today.

2 Answers

0 votes
 
Best answer
Not possible at this time.  The Project-GC API does not allow scripts to see who else has logged a cache.
by SeekerSupreme (5.0k points)
selected by pcc322
Not a problem: you "only" have to download all unknown finds for all owners of the unknown caches you have logged to get the necessary information.
Just do a PGC_GetFinds() on the owner IDs of all the unknowns the checked cacher has logged. But, as in my comment above, I suspect there might be a time problem.
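Very roughly, that approach could look like the sketch below. IPairs, TableInsert, PGC_GetFinds() and its option table are taken from the checker code later in this thread; checkedUserFinds and cliqueFilter are placeholder names.

-- checkedUserFinds: result of PGC_GetFinds() for the cacher being checked (unknowns only).
-- Collect the distinct owner IDs of those unknowns, then fetch each owner's finds.
local ownerIds, seen = {}, {}
for _, f in IPairs(checkedUserFinds) do
    if seen[f['owner_id']] == nil then
        seen[f['owner_id']] = true
        TableInsert(ownerIds, f['owner_id'])
    end
end

local ownerFinds = {}
for _, ownerId in IPairs(ownerIds) do
    -- One API call per owner; this is the part that may run into the time limit.
    ownerFinds[ownerId] = PGC_GetFinds(ownerId, { fields = {'gccode', 'owner_id'}, order = 'OLDESTFIRST', filter = cliqueFilter })
end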
Target., you're the best. Of course that is the solution. Whether it can be done in 30 seconds of CPU is still to be seen. Are you working on it, or shall I? (No need to duplicate work.)
The challenge part of PGC only gives me an HTML error 500, and I can't do anything like that at the moment, so you do it.
I see Target and SeekerSupreme are seriously looking at this.  You can see why I was asking, as the manual approach is almost (but not quite) impossible.
I suspect it is quite easy to find a solution to that problem on PGC. Take the COs with the most hides and finds of unknowns in your area and do a map compare with all of them, with owned counted as found.
Add the caches to the VGPS and check whether there is a hide from every CO there.
If you can find caches from all the COs, you have found the clique and the caches. If not, change the CO constellation and try again.
I found one in my area after two tries, and I would be surprised if it takes you many attempts.
I found a clique for you on my second try, using the top hiders of mysteries in your county over the last two years as the list.
Try to find it yourself as an exercise in PGC functions.
So no need for a checker then? :-)  Am I off duty?
Wow.  I am a bit sleepy now, so will try to do that in the morning.  I appreciate your work on this.  Clearly you have a lot of expertise in looking at the logic involved in problems.
0 votes

I've written a checker. It does seem to be quite slow - running at around a minute for Target. - but I've improved the efficiency as much as I could.

http://project-gc.com/Challenges/GC2Y0XJ/14785

by sumbloke (Expert) (35.1k points)
It looks like it works fine.
I found an easy speedup. API accesses are slow if they use the database.
Don't work with owner names but with IDs. If all calls to PGC_ProfileId2Name() are removed and the IDs are used instead, the runtime on me goes from 44 to 7.6 seconds. Convert the IDs to names in the output at the end instead; you will only need 5 API calls.
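In other words, something like this sketch (cliqueIds and the function name are placeholders): keep numeric profile IDs through the whole search and translate them only once the clique has been found.

-- Convert the five clique members' profile IDs to names only for the final output,
-- so PGC_ProfileId2Name() is called just five times in total.
local function cliqueMemberNames(cliqueIds)
    local names = {}
    for _, profileId in IPairs(cliqueIds) do
        TableInsert(names, PGC_ProfileId2Name(profileId))
    end
    return names
end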

Every time you access a user's finds you do a PGC_GetFinds(), and that is slow.
Cache the result in a table and you will get a speedup; see the code example at the end of this post. It reduced the execution time on me down to 1.3 s.

Another speedup I did before those is to only consider caches that the checked user has found. Create an associative table with those caches when you process the checked user's finds, and add a check that the cache is in that list in the loop over the cache owners' finds.

That can already be done when you create the cached version of the owner's finds, combined with zeroing the list if the owner has fewer than clique-size finds in common with the checked user.
You should also remove that user from the list of users, but that looks hard in this code; at least stop all recursive calls for users with too few finds in common.

The checker is then fast for most users. It is hard to test, because the time depends on the order of your finds and not necessarily on the number of finds. That is at least true if you get an OK result, and I get one for all users I have tested who have a lot of finds.


You have to create

local ownerfinds_caches = {}   -- per-owner cache of (filtered) finds
local userfinds = {}           -- set of GC codes the checked user has found

and add this line in the loop at line 31:

userfinds[f['gccode']] = f['gccode']

Code to replace the PGC_GetFinds call:

if ownerfinds_caches[ownerId] == nil then
    -- First time this owner is seen: fetch their finds once and cache them.
    local tmp = PGC_GetFinds(ownerId, { fields = {'gccode', 'owner_id'}, order = 'OLDESTFIRST', filter = cliqueFilter })
    ownerfinds_caches[ownerId] = {}

    -- Keep only finds on caches the checked user has also found.
    for _, f in IPairs(tmp) do
        if userfinds[f['gccode']] ~= nil then
            TableInsert(ownerfinds_caches[ownerId], f)
        end
    end

    -- Too few finds in common to matter for a clique: discard them all.
    if #ownerfinds_caches[ownerId] <= cliqueSize then
        ownerfinds_caches[ownerId] = {}
    end
end
local ownerfinds = ownerfinds_caches[ownerId]
Name/ID: Of course, I'll implement that improvement. I'm not sure what I was thinking yesterday...

Caching finds: I'm already caching them and intended to only do the lookup once per user. I forgot to add one line which set the flag that the user had been checked already.

Restricting the caches checked for owners to just those found by the user being checked: this overly restricts the scope of the checker. The cache description states that you don't have to have found any of the caches which link the owners; they just have to have found some of each other's. I have, however, restricted the checking to only owners the user has found (which I should have done in the first place).
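A sketch of that corrected restriction (candidateOwners is a placeholder; the rest follows the snippet above): the per-owner cache is still built, but a find is kept when it is on a cache owned by another candidate owner, not only when the checked user has found it too.

-- candidateOwners: set of profile IDs owning unknowns the checked user has found.
-- Replaces the userfinds[...] test in the caching snippet above.
for _, f in IPairs(tmp) do
    if candidateOwners[f['owner_id']] ~= nil then
        TableInsert(ownerfinds_caches[ownerId], f)
    end
end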

I've implemented these improvements in my dev script. I'll copy them into the live one once I've been able to test (I'm getting script execution errors at the moment on all scripts I try)
I just ran this. It gave a script execution error. I was able to do the fiddling manually, but it took about an hour, maybe more. My need is satisfied, but there are a lot of people who would use this, I'm sure.
My mistake with only using finds on caches I have found.
I read the rules again and found a potential problem:
>>"Locate a clique consisting of five members and provide a list of the 20 puzzle  caches that bind them together. See the sample log below."

There exists no sample log, only the note that describes a clique.
The problem is: does "20 puzzle caches" mean 20 different caches, or is it OK with 5 caches that all of them have logged/owned?
There is a question about this in an earlier note, but no answer.
I am not sure whether the checker checks for that, or even whether it is necessary.
The question of how many distinct caches are needed is answered, I think. dgauss says 20 caches, and I take that as a requirement. I have to go back and look over my "found" log. I don't think I would have been able to get this far without the comments here.
I don't think the work is done, but I'm giving it a vote, as I think it deserves it.
He says 20 caches, but there are found logs going back to 2011 with duplicates included. I take that as implicit acceptance of duplicates.

Adding code to remove duplicates would probably increase the runtime to an untenable level, so (unless the CO gives clear direction otherwise) I'd say this checker is sufficient.
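For what it's worth, the distinctness check itself would be cheap (a sketch; linkingFinds and the function name are placeholders). The runtime concern is about making the clique search insist on 20 different caches whenever this simple check fails, not about the check itself.

-- Post-check: are the 20 linking finds on 20 different caches?
-- linkingFinds is assumed to be a list of entries with a 'gccode' field.
local function allDistinct(linkingFinds)
    local seen = {}
    for _, f in IPairs(linkingFinds) do
        if seen[f['gccode']] ~= nil then
            return false               -- the same cache is used for more than one pair
        end
        seen[f['gccode']] = true
    end
    return true
end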
The checker has had the improvements implemented and now does Target. in under 10 seconds.
...