This paper claims to have found a technique to resum the loop diagrams of general relativity - and potentially of all other theories, renormalizable as well as non-renormalizable - and to calculate UV-finite results.
Of course, with all due respect to the author, I think that all such attempts are most likely wrong. There is one general and essential question that all the authors of such papers - much like all researchers in loop quantum gravity, and Lee Smolin in particular - seem to misunderstand. The real problem is not to obtain finite results; the real problem is to obtain results that do not depend on infinitely many unknown parameters.
Loop quantum gravity is another framework that is often claimed to be "finite" because it has a minimal distance scale. In the terminology of people who were taught quantum field theory the way I was, it is not a finite theory. Instead, it is a theory with a cutoff. A finite theory is a theory whose predictions do not depend on the regulator. In other words, loop quantum gravity, Ward's paper, as well as dozens of similar attempts are just proposing another regularization and another set of conventions.
We have many other ways to regulate UV-divergent integrals - such as a sharp momentum cutoff. Other regulators, such as dimensional regularization, automatically preserve some symmetries - gauge symmetries, for example - and Ward's approach probably also cancels some unwanted diff-violating diagrams automatically. But such a regularization is not yet a way to obtain unique results from the theory.
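To illustrate what I mean by "another regularization", take the textbook one-loop Euclidean bubble integral and evaluate it with two different regulators; the symbols Lambda, mu and epsilon below are just the usual cutoff, renormalization scale and dimensional-regularization parameter, and the point is that the divergent pieces look different while neither result is "the" finite answer:

\[ \int \frac{d^4k_E}{(2\pi)^4}\,\frac{1}{(k_E^2+m^2)^2} \;\approx\; \frac{1}{16\pi^2}\left[\ln\frac{\Lambda^2}{m^2}-1\right] \quad \text{(sharp cutoff)} \]

\[ \mu^{2\epsilon}\int \frac{d^{4-2\epsilon}k_E}{(2\pi)^{4-2\epsilon}}\,\frac{1}{(k_E^2+m^2)^2} \;=\; \frac{1}{16\pi^2}\left[\frac{1}{\epsilon}-\gamma_E+\ln\frac{4\pi\mu^2}{m^2}\right]+O(\epsilon) \quad \text{(dimensional regularization)} \]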
Even if you only consider terms that respect all required symmetries, there are still many terms that can be added to the classical or effective action, or that can be generated by the loops. Even without reading Ward's paper in detail, it seems obvious that he or she argues that a finite answer can be obtained even if you include more complicated interactions. And that's the problem, because these coupling constants are not known.
If you consider a classical or an effective action in GR, there can be higher-derivative terms such as "R^2", where "R" stands for the Riemann tensor with various contractions of the indices. All their coefficients are a priori undetermined. The loop diagrams generate infinite contributions to these coefficients in the effective action. But that's not the real problem: the infinite parts can be subtracted by counterterms. The real problem is that the remaining finite value of each coefficient is unknown. Equivalently, the actual problem is that you can add higher-derivative terms with finite coefficients to the classical action and obtain an equally consistent theory that must be treated as equally plausible according to Gell-Mann's totalitarian principle.
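To be concrete, the four-derivative part of such an action looks schematically as follows; the coefficients a_1, a_2, a_3 (my notation, not the paper's) are exactly the a priori undetermined numbers I am talking about, and the dots stand for terms with six and more derivatives:

\[ S_{\rm eff} = \int d^4x\,\sqrt{-g}\left(\frac{R}{16\pi G} + a_1\,R^2 + a_2\,R_{\mu\nu}R^{\mu\nu} + a_3\,R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} + \dots\right) \]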
In renormalizable theories, the finite piece of a coupling constant such as the fine-structure constant can be determined experimentally. The values of the remaining - irrelevant - coefficients can be naturally set to zero (or to small calculable values) by the requirement that the quantum field theory remains valid at very high energy scales.
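This is just the standard effective-field-theory power counting: if an irrelevant operator O_d of dimension d > 4 is generated at a very high scale Lambda, its coefficient is of order 1/Lambda^(d-4), and its contribution to observables at accessible energies E is suppressed accordingly (the symbols here are generic, not taken from Ward's paper):

\[ \delta\mathcal{L} \sim \frac{c}{\Lambda^{d-4}}\,\mathcal{O}_d \quad\Longrightarrow\quad \text{corrections at energy } E \;\sim\; c\left(\frac{E}{\Lambda}\right)^{d-4} \ll 1 . \]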
But such a treatment is impossible in non-renormalizable theories such as the Fermi theory or general relativity. These theories break down above the weak scale or above the Planck scale, respectively. They break down even if you include the simplest interactions only. Because of this fact, you are forced to consider all other interaction terms that break down at the same scale. And there are infinitely many of them.
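Parametrically (dropping all numerical factors), the tree-level amplitudes built from the dimensionful couplings grow with energy and hit the unitarity bound at the corresponding scale, which is what "breaking down" means here:

\[ \mathcal{A}_{\rm Fermi} \sim G_F\,E^2 \quad (\text{trouble near } E \sim G_F^{-1/2} \sim 300\ \text{GeV}), \qquad \mathcal{A}_{\rm grav} \sim G_N\,E^2 \sim \left(\frac{E}{M_{\rm Pl}}\right)^2 . \]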
You might choose one particular set of values of the infinitely many coefficients because they look "nicer" and "simpler" on paper. But looking "nicer" or "simpler" on paper is not a physically motivated criterion, unlike the validity of renormalizable theories at very high energy scales. There is no physical reason to argue that one set of values of the coefficients of the non-renormalizable interactions is better than others. You are still choosing a random point in an infinite-dimensional space and the only question is whether you realize this fact.
Because the author clearly seems to disagree with these points - to put it very democratically - that are viewed by others as standard material of courses such as Quantum Field Theory II, it is extremely difficult for me to read his or her paper. It just seems so manifestly wrong that reading it is probably a waste of time.
But on the other hand, I can imagine that there exist some valid and general insights about the loop structure of general relativity after all. String theory does predict a particular form of the higher-derivative terms in the effective action. I tend to believe that if you look at the n-derivative terms in the effective action, some of their features will be general, because these very-high-dimension operators could be related to black hole physics: black hole creation in trans-Planckian scattering and black hole evaporation.
For example, in M-theory, I only expect "c_k . R^(3k+1)" terms to appear in the effective action, and the coefficients "c_k" for large values of "k" should have a characteristic behavior - a function of "k factorial" or something like that - much like the stringy amplitudes have a characteristic very-high-genus behavior. Does anyone know a more detailed answer?
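Schematically, and just by eleven-dimensional dimensional analysis (this is my rough bookkeeping with the eleven-dimensional Planck length l_11, not a derived result), such an expansion would look like

\[ S \;\sim\; \frac{1}{\ell_{11}^{\,9}}\int d^{11}x\,\sqrt{-g}\left( R + \sum_{k\ge 1} c_k\,\ell_{11}^{\,6k}\,R^{3k+1} \right), \]

with the interesting question being precisely the large-k growth of the dimensionless coefficients c_k.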
At any rate, I don't believe that such questions can be answered by a paper whose author seems to deny what the real problem actually is.
Unitarity
S.R. has argued that unitarity is very constraining and that a finite unitary S-matrix is hard to achieve and would be interesting. I agree - and I think that all unitary finite completions of the low-energy gravitational S-matrix must follow from string theory - but experience suggests that all approaches with a "universal" treatment of UV problems are non-unitary.
Recall another example of this sort: if you add a "PHI.box^2.PHI" term to your Klein-Gordon theory (and analogous terms to other theories), it seemingly makes the propagator fall off as "1/p^4" at high energies. Such an improved behavior makes the loop diagrams converge more rapidly. However, this theory is not unitary, because with the extra four-derivative term it is equivalent to a theory with two scalar fields, one of which has the wrong sign of the kinetic term (a ghost).
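The ghost is visible already in a one-line partial-fraction exercise. Schematically - ignoring factors of i and overall normalization, with M the scale suppressing the higher-derivative term - the improved propagator decomposes as

\[ \frac{1}{p^2 - p^4/M^2} \;=\; \frac{1}{p^2} \;-\; \frac{1}{p^2 - M^2}, \]

i.e. a healthy massless pole plus a pole at p^2 = M^2 whose residue has the wrong sign - exactly the ghost of the two-field description.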
The fourth power is a rather simple candidate for a cure; more generic recipes to make the sum finite probably produce singularities with wrong-sign residues, too. Because this question is not discussed in the paper at all, I suppose that there is no reason why only poles with the right sign of the residue should appear. The method then leads to ghosts and is not unitary.
Also, I am convinced that any unitary theory that reduces to GR at the classical level must confirm the existence of black holes in high-energy scattering, precisely because of unitarity. Because the high-energy behavior in the present paper looks different from what black hole production predicts, the obtained amplitudes can't be unitary. It is the black hole microstates that become relevant at the same energy scale where the divergent graphs become important - the trans-Planckian regime - and any paper that tries to deny this fact is fundamentally flawed.
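The standard parametric estimate in four dimensions (ignoring O(1) factors): at center-of-mass energy sqrt(s) far above the Planck scale, collisions at impact parameters below the Schwarzschild radius of the collision energy produce black holes, so the cross section grows geometrically,

\[ \sigma_{\rm BH} \;\sim\; \pi r_s^2 \;\sim\; \pi\,(2 G_N \sqrt{s})^2 \;\sim\; 4\pi\, G_N^2\, s , \]

a growth that any unitary completion of GR has to reproduce.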
New degrees of freedom, as seen in string theory, are not a "mathematical artifact", as the author seems to suggest, but objects that are necessary for a smooth behavior of the full theory - including gravity - at the quantum level, much like the W-bosons are needed to make sense of the four-fermion theory and a Higgs-like pole is needed to restore the unitarity of the WW scattering. In weakly coupled string theory, the lightest such new objects are excited strings. These new states become progenitors of black holes if you increase the string coupling. At a generic coupling - for example in M-theory - the new degrees of freedom (new "quantum fields") correspond to black hole microstates. In all cases, these objects lead to new non-analyticities of the S-matrix.
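The W-boson analogy can be made quantitative with the textbook matching relation between the Fermi constant and the W mass,

\[ \frac{G_F}{\sqrt{2}} \;=\; \frac{g^2}{8\,M_W^2}, \]

which says that the four-fermion contact interaction is nothing but the low-energy limit of W exchange; the propagator pole at p^2 = M_W^2 is precisely the kind of new non-analyticity that restores unitarity, and the stringy and black-hole states play the same role for gravity.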
These objects are real and attempts to live without them are as flawed as attempts to describe weak interactions at all scales without W-bosons.