Let’s define an array RES, where RES[i][j] represents the total number of eligible pairs of strings that start at index i and end at index j. With RES precomputed, we can answer any query in O(1).

We need to build a data structure such that, given any string S, it can tell us how many already-inserted strings match S in O(|S|) time, where |S| is the length of S. We can do this with a trie. We maintain a separate trie, TRIE[u], for each suffix starting position u.
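As a sketch, a counting trie supporting these two operations might look like this (a minimal version, assuming names consist of lowercase letters; the struct and method names are illustrative, not the judge code):

```cpp
#include <array>
#include <memory>
#include <string>

// Minimal counting trie (sketch; assumes lowercase letters only).
// cnt = number of inserted strings that end exactly at this node,
// so count(S) answers "how many inserted strings match S" in O(|S|).
struct Trie {
    struct Node {
        std::array<std::unique_ptr<Node>, 26> child;
        long long cnt = 0;
    };
    Node root;

    // Insert one string in O(|s|).
    void insert(const std::string& s) {
        Node* cur = &root;
        for (char c : s) {
            int i = c - 'a';
            if (!cur->child[i]) cur->child[i] = std::make_unique<Node>();
            cur = cur->child[i].get();
        }
        cur->cnt++;
    }

    // How many inserted strings equal s, in O(|s|).
    long long count(const std::string& s) const {
        const Node* cur = &root;
        for (char c : s) {
            int i = c - 'a';
            if (!cur->child[i]) return 0;
            cur = cur->child[i].get();
        }
        return cur->cnt;
    }
};
```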

If you are not familiar with tries, please check these links:

Now sort all the names in increasing order of time and iterate through them one by one. When a person enters the party, we insert his name into the data structure; when he leaves, we delete it again.

Now let’s see how we can handle these insert and delete operations.

Let’s say a person with name S has entered the party. For each substring S(u,v) of string S, insert that substring into TRIE[u], where u is the starting position and v is the ending position of S(u,v).
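Note that all substrings starting at a fixed u are exactly the prefixes of the suffix S[u..], so they can all be inserted in a single walk down TRIE[u], bumping a counter at every node visited. A sketch of that walk (the node layout and names are assumptions for illustration):

```cpp
#include <array>
#include <memory>
#include <string>

// Trie node: cnt = how many inserted substrings end at this node.
struct Node {
    std::array<std::unique_ptr<Node>, 26> child;
    long long cnt = 0;
};

// Insert every substring s(u, v), v = u..|s|-1, into the trie rooted at
// `root` in one O(|s| - u) walk: each node on the path corresponds to one
// substring, so we increment its counter as we pass.
void insertSuffix(Node* root, const std::string& s, int u) {
    Node* cur = root;
    for (int v = u; v < (int)s.size(); ++v) {
        int i = s[v] - 'a';
        if (!cur->child[i]) cur->child[i] = std::make_unique<Node>();
        cur = cur->child[i].get();
        cur->cnt++;  // the substring s(u, v) is now recorded
    }
}
```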

Now, how does this new insertion change RES[u][v]?
Let K be the number of strings in TRIE[u] that match the substring S(u,v) before inserting it. So C(K,2) was counted in RES[u][v] for this particular substring.
Here C(K,2) is the number of ways to select a pair of items from K items.
After the insertion the count becomes K+1, so the contribution should now be C(K+1,2).
To adjust the result, remove C(K,2) and add C(K+1,2) in RES[u][v]. That is, RES[u][v] = RES[u][v] - C(K,2) + C(K+1,2). (Equivalently, the insertion adds exactly K new pairs, since C(K+1,2) - C(K,2) = K.)

The deletion process is symmetric. Since we are deleting, the new count is K-1, so we remove C(K,2) and add C(K-1,2) instead.
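Putting insertion and deletion together, both reduce to one update with delta = +1 or -1. A simplified, self-contained sketch of the idea (using a std::map per starting index as a stand-in for TRIE[u] to keep it short; the struct and names are assumptions, not the judge solution):

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of the RES maintenance. A std::map per starting index u plays
// the role of TRIE[u]; cnt[u][sub] is K, the number of currently present
// names whose substring starting at u equals sub.
struct Party {
    int n;                                              // max name length (assumed)
    std::vector<std::map<std::string, long long>> cnt;  // stand-in for TRIE[u]
    std::vector<std::vector<long long>> RES;

    explicit Party(int n) : n(n), cnt(n), RES(n, std::vector<long long>(n, 0)) {}

    static long long C2(long long k) { return k * (k - 1) / 2; }  // C(k, 2)

    // delta = +1 when a person enters, -1 when they leave.
    void update(const std::string& s, int delta) {
        for (int u = 0; u < (int)s.size(); ++u) {
            std::string sub;
            for (int v = u; v < (int)s.size(); ++v) {
                sub += s[v];
                long long K = cnt[u][sub];           // count before the update
                RES[u][v] += C2(K + delta) - C2(K);  // re-adjust the pair count
                cnt[u][sub] = K + delta;
            }
        }
    }
};
```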

Complexity: All substrings starting at a fixed position u are the prefixes of the suffix starting at u, and they can all be inserted or deleted in a single O(length of the suffix) walk down TRIE[u]. Doing this for every suffix, we insert or delete all substrings of a string of length L in O(L^2) total.

Note: We could also handle the same operations using hashing and an unordered_map. Since unordered_map works in O(1) on average, the asymptotic complexity would be the same. But unordered_map is very slow in practice on large amounts of data, so that solution would not pass even in 3 seconds.

Judge Solution:

Alternate Writer: Labib Md Rashid

