Programming with passion

### 557C – Arthur and Table (Codeforces)

The problem statement can be found here.

Problem:
Given the lengths of a table's legs and the energy needed to remove each of them, we must balance the table while minimizing the total energy spent. A table is balanced when more than half of its legs have the maximum length. For example, a table with 5 legs of lengths 3 1 4 4 4 will be balanced if:

• We remove the legs of lengths 1 and 3 (we keep the three legs of length 4), or
• We remove the legs of lengths 3 and 4 (we keep the leg of length 1), or
• We remove the legs of lengths 1 and 4 (we keep the leg of length 3).

Solution:
We will use a simple algorithm to solve this problem: we will first group the legs by length. Then, for each one of these groups, we will assume it is the one that will balance the table. We will then proceed to remove legs outside of the group in such a way that we minimize the energy spent. Finally, over all the groups, we pick the one that yields the lowest energy expenditure.

Now what? Well, we still need to figure out the implementation details. To group the legs, we can sort them by length in non-decreasing order. Let $g_i$ be the group currently being processed and $k$ its size (the number of legs with that length). Because we are assuming that $g_i$ is the group that will balance the table, we need to remove all the legs that are longer than it. This can be done in constant time by precomputing, after sorting, the suffix sums of the energies from right to left. As for the legs shorter than $g_i$, we need to keep at most $k-1$ of them, chosen so that the energy spent removing the others is minimal. Every time we finish dealing with a group of legs, we add them to a sorted list (a multiset), so that it is easier to select which legs should be unscrewed. Let's call this list $s$. Now, there are two ways we can select the optimal subset:

Select legs with minimal energy to unscrew: (TLE response)
We can traverse $s$ from left to right (assuming it is sorted in non-decreasing order) and add up the first $\max(0, s.length - (k-1))$ elements. The problem with this approach is that the complexity can become quadratic, because we have to iterate over all the legs that are to be removed. As an example, imagine a list of leg lengths that are all different, i.e. each group has exactly one element. For each group (each leg, in this case), we then have to iterate over $s.length$ legs and add up the energies. After a group (a leg) is dealt with, we add it to $s$. This clearly approaches quadratic complexity (which TLEs), which leads us to the next approach…

Select legs with maximal energy and unscrew the rest: (AC response)
We can use a similar idea: by keeping a running sum of the elements added to $s$, we can instead select the legs with maximal energy. To this end, we make $s$ a non-increasing sorted list. This improves the complexity because we only have to iterate over at most $k-1$ elements of $s$ each time, and the group sizes $k$ sum to $n$ overall, so this approach is clearly not quadratic. In this case, the total energy spent removing the legs to the left of $g_i$ equals $sum(s) - sum(s[:(k-1)])$ (I'm using slicing to simplify the notation).

Code:

#include <cstdio>
#include <set>
#include <algorithm>
#include <functional>

#define Max(a, b) ((a) > (b) ? (a) : (b))
#define Min(a, b) ((a) < (b) ? (a) : (b))

#define FOR(i, s, e, inc) for (int i=(s) ; i<(int)(e) ; i+=(inc))
#define RFOR(i, s, e, dec) for (int i=(s) ; i>=(int)(e) ; i-=(dec))
#define REP(i, e) FOR(i, 0, e, 1)
#define RREP(i, s) RFOR(i, s, 0, 1)

using namespace std;

const int MAXN = 100005;

int n;
int sum[MAXN + 1];                       // sum[i] = total energy of legs i..n-1
pair<int, int> legs[MAXN];               // (length, removal energy)
multiset<int, greater<int>> past_legs;   // energies of shorter legs, largest first
int energy_so_far = 0;                   // sum of all elements in past_legs

void find_left_sum() {
    sum[n] = 0;
    RREP(i, n-1) {
        sum[i] = sum[i + 1] + legs[i].second;
    }
}

int solve_one(int k, int init, int end) {
    // Remove every leg longer than the group (sum[end+1]) and every shorter
    // leg except the k-1 with the highest removal energy.
    int total_energy = sum[end + 1] + energy_so_far;
    auto it = past_legs.begin();
    for (int i=0 ; i<(k-1) && it != past_legs.end() ; i++) {
        total_energy -= (*it);
        it++;
    }

    // The current group now joins the set of processed (shorter) legs.
    for (int i=init ; i<=end ; i++) {
        past_legs.insert(legs[i].second);
        energy_so_far += legs[i].second;
    }

    return total_energy;
}

int main() {
    while (scanf("%d\n", &n) == 1) {
        past_legs.clear();
        energy_so_far = 0;
        REP(i, n) {
            scanf("%d", &legs[i].first);
        }

        REP(i, n) {
            scanf("%d", &legs[i].second);
        }

        sort(legs, legs + n);
        find_left_sum();

        int k, init, cur_l;
        int ans = 1e9;
        k = 1;
        cur_l = legs[0].first;
        init = 0;
        for (int i=1 ; i<n ; i++) {
            if (legs[i].first != cur_l) {
                // Process the current group of equal-length legs
                int energy = solve_one(k, init, i-1);
                ans = Min(ans, energy);
                k = 1;
                init = i;
                cur_l = legs[i].first;
            }
            else {
                k++;
            }
        }

        int energy = solve_one(k, init, n-1);
        printf("%d\n", Min(ans, energy));
    }
}


### 555B – Case of Fugitive (Codeforces)

This is a very interesting problem; the full description is here.

Problem:
Given a set of islands (represented by their $[l, r]$ endpoints), and bridges (represented by their width), we are to assign each one of these bridges to the gaps between the islands so that all of them are connected.

Solution:
The first thing to notice is that what we really care about are the gaps between two contiguous islands, and not the islands themselves. Specifically, a gap between two islands will be represented by a pair of numbers $(min_d, max_d)$, the minimum and the maximum width a bridge can have in order to cover the gap. These two values are easily obtained: $min_d = l_{i+1} - r_i$ and $max_d = r_{i+1} - l_i$ for all $i$. Now, for every gap, we need to find the right bridge to use, and then, remove that bridge from the set of available bridges.

So far, so good; we have a general idea how to solve this problem. However, there are still some big details missing: how do we pick the right gap to assign a bridge to? How do we search (optimally) for the bridge? How do we delete this bridge from the list of available bridges?

Well, for the last two, our friend set comes to the rescue! As most of you might know, sets in C++ are implemented as balanced BSTs (typically red-black trees). This makes them suitable for search (logarithmic time) and erasure through an iterator (amortized constant time). Hence, we will represent the available bridges as a set. For the first question, I came up with two strategies (only one during contest time), which I will mention for the sake of completeness.

Sort the gaps by minimum size (wrong strategy):
The first strategy that I came up with was to sort the gaps increasingly by the minimum distance and then iterate over this sorted list and look for the right bridge. This strategy fails in cases like the following:

Islands: (1, 8) (9, 10) (12, 13)
Gaps (sorted by min_d): (1, 9) (2, 4)
Bridges: 3 5


As you can see, with this approach we assign the first bridge (width 3) to the first gap, and then (incorrectly) conclude that there is no satisfying assignment. This heuristic fails because it ignores the fact that gaps with a larger spread between $min_d$ and $max_d$ can still be covered by wider bridges later on. Which leads us to the second approach…

Sort the gaps by maximum size (accepted strategy):
If you look closely, you will realize (I didn't at the time of the contest) that sorting the gaps based on their $max_d$ overcomes this issue. Specifically, if we sort the gaps by $max_d$ and always pick the narrowest bridge that fits, we get the nice property that we take care first of the most constrained gaps while still giving priority to those with smaller $min_d$.

Disclaimer: I didn't solve this problem during the round, and I did get some help from the comments after the round ended.

Note: The code looks a bit cluttered because we are asked for the indices of the bridges to use, instead of their widths, which makes it necessary to store some additional information.

Code:
Here is the code.

#include <cstdio>
#include <iostream>
#include <vector>
#include <set>
#include <algorithm>

using namespace std;

typedef long long ll;
typedef pair<pair<ll, ll>, int> lll;
typedef pair<int, int> ii;

int N, M;
ll l1, r1, l2, r2;
// Min, Max distances
vector<lll> to_build;
set<pair<ll, int>> bridges;

void _solve(int N, int M) {
    to_build.clear();
    bridges.clear();

    cin >> l1 >> r1;
    for (int i=0 ; i<N-1 ; i++) {
        cin >> l2 >> r2;
        // Gap i: a bridge of width w fits iff l2 - r1 <= w <= r2 - l1
        to_build.push_back(make_pair(make_pair(l2 - r1, r2 - l1), i));
        l1 = l2;
        r1 = r2;
    }

    for (int i=0 ; i<M ; i++) {
        ll b;
        cin >> b;
        bridges.insert(make_pair(b, i + 1));
    }

    // Process the most constrained gaps (smallest max_d) first
    sort(to_build.begin(), to_build.end(), [](lll a, lll b) {
        return a.first.second < b.first.second;
    });

    vector<ii> ans;
    for (auto tb : to_build) {
        pair<ll, ll> gap = tb.first;
        // Narrowest available bridge that is at least min_d wide
        auto it = bridges.lower_bound(make_pair(gap.first, 0));
        if (it == bridges.end() || (it->first) > gap.second) {
            printf("No\n");
            return;
        }
        ans.push_back(make_pair(tb.second, it->second));
        bridges.erase(it);
    }

    sort(ans.begin(), ans.end());
    printf("Yes\n");
    for (auto i : ans) {
        printf("%d ", i.second);
    }
    printf("\n");
}

int main() {
    while (scanf("%d %d\n", &N, &M) == 2) {
        _solve(N, M);
    }
}


This is an oldie but goodie. The problem statement can be found here.

Problem:

This is a really cool problem that I found back in the day, when I was getting started in CP. It deals with expected values and probabilities. Back then, I solved the small input, but the large input was just dreadfully difficult (for me). After the contest I read the solution, and, guess what? I didn't get any of it! Today, I finally managed to solve it. (It is worth pointing out that I didn't remember the solution, due to my lack of understanding back then.) The problem goes essentially like this: you have typed some characters of your password, and you know the probability of having typed each of them correctly. Given some fixed strategies, you need to find the one that minimizes the expected number of total keystrokes needed to type your password correctly. The strategies can be summarized in 3:

1. Keep typing, and, if wrong, retype the whole thing again.
2. Press enter and start over.
3. Delete k characters, and then finish typing. If it is wrong, retype the whole thing again.

Every time you retype a character, you are sure to get it right. As input, we are given two values, A and B, representing the number of characters typed so far and the length of the password, respectively. Then, for each of the A characters, we are given the probability of it having been typed correctly.

Solution:

In order to solve this problem, we first need to brush up on expected values. Expected values are always associated with random variables (RVs from now on). An RV is, in the simplest terms, something that takes values with certain probabilities. Think of it as a way to describe an experiment: it encapsulates information about the possible outcomes of the experiment, along with the probability of each specific outcome occurring. In our case, we will have one experiment (one RV) for each of strategies 1 and 2, and several for strategy 3. Intuitively, the expected value of an RV $X$, denoted $E[X]$, is the value the RV takes in the average case (you can read more about expected values on the wiki). The way to calculate $E[X]$ is by averaging all the possible outcomes weighted by their respective probabilities, that is, $E[X]=\sum x\cdot p(x)$. For strategy 2, there are no probabilities involved: we always make the same number of keystrokes, so the expected value is easy to calculate. Specifically, we press <enter>, type the whole password, and press <enter> again ($2 + B$ total keystrokes). For strategies 1 and 3, there is a key observation that will help us solve the problem: each of these RVs can only take two values, one when we get the password right on the first try (that is, when we don't have to retype the whole thing) and another when we don't (we got it wrong, so we have to retype it all). This means we only need to calculate two probabilities, call them $p$ and $q$. Furthermore, by the laws of probability, we also know that $p + q = 1$.

The way we calculate $E[X]$ for each RV is by traversing the list of probabilities from beginning to end. At index $i$, we assume that all the characters to the right of $i$ have been deleted; therefore the probability of the remaining prefix being correctly typed equals the product of all the probabilities from $0$ to $i$. Let's call this value $p$. As you might have imagined, the probability of the prefix being incorrectly typed is then $q = 1 - p$. After this is dealt with, we need to find out how many keystrokes it takes to finish the password in each of the two cases. This is a fun exercise that will be left to the avid mind of the reader :D.

There is one small gotcha, though: multiplying many small numbers together causes loss of precision (and eventually underflow), which is why we take logarithms to turn the product into a sum, then exponentiate to turn the sum back into a product.

All in all, this problem was really fun to solve. I hope you enjoy it as much as I did.

import sys
import math

def main():
    data = sys.stdin.read().split()
    pos = 0
    T = int(data[pos]); pos += 1
    for case in range(T):
        A, B = int(data[pos]), int(data[pos + 1]); pos += 2
        ps = [float(data[pos + i]) for i in range(A)]
        pos += A
        ps = list(map(math.log, ps))

        ans = float('inf')
        tot_prob = 0
        for i, p_log in enumerate(ps):
            tot_prob += p_log
            p = math.exp(tot_prob)  # probability the first i+1 chars are right
            q = 1 - p
            # Keep i+1 chars: delete A-(i+1), retype them, finish, press enter
            correct = 2*(A - (i+1)) + (B - A + 1)
            # On failure: the keystrokes above, plus the whole password and enter
            incorrect = correct + B + 1

            ans = min(ans, p * correct + q * incorrect)

        # Compare against strategy 2: press enter and start over (2 + B keys)
        print("Case #%d: %f" % (case + 1, min(ans, 2 + B)))

main()


### 10730 – Antiarithmetic? (UVa)

Problem statement:

The problem is simple: given a permutation of the numbers $0$ to $N-1$, we must determine whether it contains an arithmetic progression of exactly three elements as a subsequence. If it doesn't, the sequence is called antiarithmetic.

Solution:

There are several ways of approaching this problem, the simplest being three nested for loops. Because of the input size, it is evident that this won't work; we have to get smarter. The key is to notice that all the numbers in the sequence are less than $N$, which enables us to say whether the third element of a progression appears somewhere in the sequence. The way this is accomplished is simple: compare every element with each subsequent element, and check whether the third element of that progression appears further ahead in the sequence. As an example, consider the sequence 5, 3, 4, 1, 0, 2. Initially, we compare 5 and 3, which tells us that the third element of the progression must be 1; because we haven't seen a 1 so far, and because we know it must be somewhere, we conclude that the sequence contains an arithmetic progression, and thus it is not antiarithmetic.

Code:

#include <cstdio>
#include <cstring>

const int MAXN = 10005;

int N, arr[MAXN];

bool solve() {
    int seen[MAXN];
    for (int i=0 ; i<N ; i++) {
        memset(seen, 0, sizeof seen);
        for (int w=0 ; w<=i ; w++) {
            seen[arr[w]] = 1;
        }

        for (int j=i+1 ; j<N ; j++) {
            seen[arr[j]] = 1;
            int next_ = arr[j] + arr[j] - arr[i];
            // If the required third term hasn't appeared at or before j,
            // it must appear later on, completing a progression.
            if (next_ >= 0 && next_ < N && j < N-1) {
                if (!seen[next_]) {
                    return false; // There is a progression
                }
            }
        }
    }

    return true;
}

int main() {
    while (scanf("%d: ", &N), N) {
        for (int i=0 ; i<N ; i++) {
            scanf("%d", &arr[i]);
        }

        if (solve()) {
            printf("yes\n");
        }
        else {
            printf("no\n");
        }
    }
}


### 12668 – Attacking rooks (UVa)

The problem statement can be found here.

Problem:

I found this problem extremely interesting for two reasons. First, it is "closely" related to a classical chess problem, which leads to an almost immediate understanding of the problem statement. Second, the modification that sets it apart from the classical one makes the solution entirely different from what it would normally be.

We are asked to place the maximum number of rooks on a chess board of size N×N such that no two rooks attack each other. So far, this is nothing new: just place the rooks on the diagonal. But now the plot twist: there are some pawns already placed on the board, which makes deciding whether one rook attacks another non-trivial. For example, if there is a pawn in the middle of row 0, it becomes possible to place two rooks in that row. Now it should be evident why this is non-trivial.

Solution:

The solution to this problem has to do with an extremely exciting and useful technique: maximum flows. This specific problem is ultimately asking us to find a maximum bipartite matching between two sets of vertices, which is in turn a specialization of the maximum flow problem. I will try to walk you through how to visualize this problem as a bipartite matching (maximum flow) problem.

First, let's assume there are no pawns, but let's try to find a different, more elaborate solution than just putting rooks on the diagonal. When we place a rook in, say, position (0, 0), row 0 and column 0 are automatically disabled; in general, placing a rook anywhere disables its row and its column. If we imagine that our two sets to be matched are the rows and the columns, and (at first) that every row can be matched to every column, we obtain a structure that resembles a bipartite matching problem: the maximum number of rows that can be matched (used to place rooks) is also the maximum number of rooks that can be placed on the board overall. Up to this point we have not done anything very exciting; we have just found a more complicated way to solve the traditional problem of placing rooks on an empty chess board. Now, let's add the pawns. To reflect this change in our model, we create new imaginary rows and columns: each row is split by its pawns into maximal segments of empty cells ("fake rows"), and likewise each column into "fake columns". A fake row and a fake column are connected if and only if they share an empty cell, since a rook placed on that cell occupies both segments. After making these connections, we run the maximum matching algorithm on the fake rows and columns to obtain the answer.

The implementation details are left as an exercise for the reader; I will leave mine here, and if you need any assistance with it, just leave a comment.

Code:

#include <cstdio>
#include <cstring>
#include <vector>

#define rep(i, n) for (int i=0 ; i<(n) ; i++)

using namespace std;

typedef vector<int> vi;

const int MAXN = 105;
const int MAXV = MAXN * MAXN / 2 + 5; // upper bound on fake rows/columns

char board[MAXN][MAXN];
int N;
int rows, cols;                       // number of fake rows / fake columns
int fake_row[MAXN][MAXN], fake_col[MAXN][MAXN];
vi adj[MAXV];                         // fake row -> fake columns sharing a cell
int r[MAXV];                          // fake column -> matched fake row (or -1)
bool seen[MAXV];

// Kuhn's augmenting-path algorithm for bipartite matching
bool can_match(int u) {
    for (int v : adj[u]) {
        if (seen[v]) continue;
        seen[v] = true;
        if (r[v] < 0 || can_match(r[v])) {
            r[v] = u;
            return true;
        }
    }
    return false;
}

int max_matching() {
    memset(r, -1, sizeof r);
    int ans = 0;
    for (int u=0 ; u<rows ; u++) {
        memset(seen, 0, sizeof seen);
        if (can_match(u)) {
            ans++;
        }
    }
    return ans;
}

int main() {
    while (scanf("%d\n", &N) == 1) {
        rows = cols = 0;
        memset(fake_row, -1, sizeof fake_row);
        memset(fake_col, -1, sizeof fake_col);

        rep(i, N) {
            scanf("%s", board[i]);
        }

        // Split every row into maximal pawn-free segments (fake rows)
        rep(i, N) {
            int ix = -1;
            rep(j, N) {
                if (board[i][j] == 'X') {
                    for (int w=ix+1 ; w<=j-1 ; w++) fake_row[i][w] = rows;
                    if (j-1 >= ix+1) rows++;
                    ix = j;
                }
            }
            for (int w=ix+1 ; w<=N-1 ; w++) fake_row[i][w] = rows;
            if (N-1 >= ix+1) rows++;
        }

        // Same for the columns (fake columns)
        rep(j, N) {
            int iy = -1;
            rep(i, N) {
                if (board[i][j] == 'X') {
                    for (int w=iy+1 ; w<=i-1 ; w++) fake_col[w][j] = cols;
                    if (i-1 >= iy+1) cols++;
                    iy = i;
                }
            }
            for (int w=iy+1 ; w<=N-1 ; w++) fake_col[w][j] = cols;
            if (N-1 >= iy+1) cols++;
        }

        // A fake row and a fake column are adjacent iff they share an empty cell
        rep(u, rows) adj[u].clear();
        rep(i, N) rep(j, N) {
            if (board[i][j] != 'X') {
                adj[fake_row[i][j]].push_back(fake_col[i][j]);
            }
        }

        printf("%d\n", max_matching());
    }
}


### 12641 – Reodrnreig Lteetrs in Wrods (UVa)

The problem statement can be found here.

Problem:

We are given words whose middle letters have been scrambled, while the first and last letters remain in their correct positions; we are asked to output the correct words based on a dictionary that is also given to us.

Solution:

This problem is rather easy; once you find the right representation for the words, you can solve it without much coding. I decided to represent the given dictionary using two nested maps. The outer map takes the edge characters (the first and last characters of a word) to an inner map, which in turn takes the sorted middle characters (all the characters but the first and last) to the original word. For example, given the word communicate, the representation inside the maps would be "ce" -> ("acimmnotu" -> "communicate"). This key stays the same no matter how the middle letters are scrambled, which lets us solve the problem in a neat way.

Code:

#include <cstdio>
#include <iostream>
#include <sstream>
#include <string>
#include <map>
#include <algorithm>

#define all(x) (x).begin(), (x).end()

using namespace std;

map<string, map<string, string> > dict;
int N;

string translate_word(string word) {
    if (word.size() <= 2) {
        return word;
    }
    string edges = word.substr(0, 1) + word.substr(word.size() - 1);
    string middle = word.substr(1, word.size() - 2);
    sort(all(middle));
    if (dict.find(edges) != dict.end()) {
        if (dict[edges].find(middle) != dict[edges].end()) {
            return dict[edges][middle];
        }
    }
    return word;
}

int main() {
    string line, word;
    scanf("%d\n", &N);
    while (N--) {
        getline(cin, line);
        dict.clear();
        istringstream is(line);
        while (is >> word) {
            if (word.size() > 2) {
                string edges = word.substr(0, 1) + word.substr(word.size() - 1);
                string middle = word.substr(1, word.size() - 2);
                sort(all(middle));
                if (dict[edges].find(middle) == dict[edges].end()) {
                    dict[edges][middle] = word;
                }
            }
        }

        getline(cin, line);
        is.clear();
        is.str(line);
        int c = 0;
        while (is >> word) {
            if (c++) cout << " ";
            cout << translate_word(word);
        }
        cout << endl;
    }
}


### 12640 – Largest Sum Game (UVa).

The problem statement can be found here.

Problem:

This is just a very traditional problem: finding the maximum sum of a contiguous subarray.

Solution:

To solve this problem, we just need Kadane's algorithm. The intuition behind it can be obtained by looking at the cumulative sums of the original array. If you want a more detailed explanation, you can check this link or post a comment.

Code:

#include <string>
#include <vector>
#include <iostream>
#include <sstream>
#include <cstdio>
#include <algorithm>

using namespace std;

typedef vector<int> vi;

vi numbers;

int kadane() {
    int max_here, max_so_far;
    max_here = max_so_far = 0;
    for (int i=0 ; i<(int)numbers.size() ; i++) {
        max_here = max(0, max_here + numbers[i]);
        max_so_far = max(max_so_far, max_here);
    }
    return max_so_far;
}

int main() {
    string line;
    while (getline(cin, line)) {
        istringstream is(line);
        numbers.clear();
        int temp;
        while (is >> temp) {
            numbers.push_back(temp);
        }
        printf("%d\n", kadane());
    }
}