Reactor by Example (Repost)

Original article: https://www.infoq.com/articles/reactor-by-example

Key takeaways

  • Reactor is a reactive streams library targeting Java 8 and providing an Rx-conforming API
  • It uses the same approach and philosophy as RxJava despite some API differences
  • It is a 4th generation reactive library that allows operator fusion, like RxJava 2
  • Reactor is a core dependency in the reactive programming model support of Spring Framework 5.

RxJava recap

Reactor, like RxJava 2, is a fourth-generation reactive library. It was launched by Spring custodian Pivotal, and builds on the Reactive Streams specification, Java 8, and the ReactiveX vocabulary. Its design is the result of a skillful mix of designs and core contributors from Reactor 2 (the previous major version) and RxJava.

In previous articles in this series, "RxJava by Example" and "Testing RxJava", you learned about the basics of reactive programming: how data is conceptualized as a stream, the Observable class and its various operators, the factory methods that create Observables from static and dynamic sources.

Observable is the push source and Observer is the simple interface for consuming this source via the act of subscribing. Keep in mind that the contract of an Observable is to notify its Observer of 0 or more data items through onNext, optionally followed by either an onError or onComplete terminating event.

To test an Observable, RxJava provides a TestSubscriber, which is a special flavor of Observer that allows you to assert events in your stream.

In this article we'll draw a parallel between Reactor and what you already learned about RxJava, and showcase the common elements as well as the differences.

Reactor's types

Reactor's two main types are the Flux<T> and Mono<T>. A Flux is the equivalent of an RxJava Observable, capable of emitting 0 or more items, and then optionally either completing or erroring.

A Mono on the other hand can emit at most once. It corresponds to both the Single and Maybe types on the RxJava side. Thus an asynchronous task that just wants to signal completion can use a Mono<Void>.

This simple distinction between two types makes things easy to grasp while providing meaningful semantics in a reactive API: by just looking at the returned reactive type, one can know if a method is more of a "fire-and-forget" or "request-response" (Mono) kind of thing or is really dealing with multiple data items as a stream (Flux).

Both Flux and Mono make use of this semantic by coercing to the relevant type when using some operators. For instance, calling single() on a Flux<T> will return a Mono<T>, whereas concatenating two monos together using concatWith will produce a Flux. Similarly, some operators will make no sense on a Mono (for example take(n), which produces n > 1 results), whereas other operators will only make sense on a Mono (e.g. or(otherMono)).
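To make this coercion concrete, here is a minimal sketch (the values are made up purely for illustration):

Flux<String> letters = Flux.just("a", "b", "c");
Mono<String> firstLetter = letters.take(1).single();   // a Flux coerced into a Mono

Mono<String> first = Mono.just("first");
Mono<String> second = Mono.just("second");
Flux<String> both = first.concatWith(second);          // two Monos concatenated into a Flux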

One aspect of the Reactor design philosophy is to keep the API lean, and this separation into two reactive types is a good middle ground between expressiveness and API surface.

"Build on Rx, with Reactive Streams at every stage"

As expressed in "RxJava by Example", RxJava bears some superficial resemblance to Java 8 Streams API, in terms of concepts. Reactor on the other hand looks a lot like RxJava, but this is of course in no way a coincidence. The intention is to provide a Reactive Streams native library that exposes an Rx-conforming operator API for asynchronous logic composition. So while Reactor is rooted in Reactive Streams, it seeks general API alignment with RxJava where possible.

Reactive Libraries and Reactive Streams adoption

Reactive Streams (abbreviated RS in the remainder of this article) is "an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure". It is a set of textual specifications along with a TCK and four simple interfaces (Publisher, Subscriber, Subscription and Processor), which will be integrated in Java 9.
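For reference, those four interfaces are small enough to sketch here (this mirrors the org.reactivestreams API):

public interface Publisher<T> {
    void subscribe(Subscriber<? super T> s);
}

public interface Subscriber<T> {
    void onSubscribe(Subscription s);
    void onNext(T t);
    void onError(Throwable t);
    void onComplete();
}

public interface Subscription {
    void request(long n);
    void cancel();
}

public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}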

It mainly deals with the concept of reactive-pull back-pressure (more on that later) and how to interoperate between several implementing reactive sources. It doesn't cover operators at all, focusing instead exclusively on the stream's lifecycle.

A key differentiator for Reactor is its RS-first approach. Both Flux and Mono are RS Publisher implementations and conform to reactive-pull back-pressure.

In RxJava 1 only a subset of operators support back-pressure, and even though RxJava 1 has adapters to RS types, its Observable doesn't implement these types directly. That is easily explained by the fact that RxJava 1 predates the RS specification and served as one of the foundational works during the specification's design.

That means that each time you use these adapters you are left with a Publisher, which again doesn't have any operator. In order to do anything useful from there, you'll probably want to go back to an Observable, which means using yet another adapter. This visual clutter can be detrimental to readability, especially when an entire framework like Spring 5 directly builds on top of Publisher.

Another difference with RxJava 1 to keep in mind when migrating to Reactor or RxJava 2 is that in the RS specification, null values are not allowed. This can turn out to be important if your code base uses null to signal special cases.
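If your code does rely on null, one possible adaptation is to map the absence of a value to an empty Mono rather than emitting null. A sketch (legacyLookup is a hypothetical method that may return null, used only for illustration):

String nullableValue = legacyLookup();                // hypothetical method that may return null
Mono<String> safe = Mono.justOrEmpty(nullableValue);  // empty Mono instead of a forbidden null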

RxJava 2 was developed after the Reactive Streams specification, and thus has a direct implementation of Publisher in its new Flowable type. But instead of focusing exclusively on RS types, RxJava 2 also keeps the "legacy" RxJava 1 types (Observable, Completable, and Single) and introduces the "RxJava Optional", Maybe. Although they still provide the semantic differentiation we talked about earlier, these types have the drawback of not implementing RS interfaces. Note that unlike in RxJava 1, Observable in RxJava 2 does not support the backpressure protocol (a feature now exclusively reserved to Flowable). It has been kept for the purpose of providing a rich and fluent API for cases, such as user interface eventing, where backpressure is impractical or impossible. Completable, Single and Maybe by design have no need for backpressure support; they offer a rich API as well and defer any workload until subscribed.

Reactor is once again leaner in this area, sporting its Mono and Flux types, both implementing Publisher and both backpressure-ready. There's a relatively small overhead for Mono to behave as a Publisher, but it is mostly offset by other Mono optimizations. We'll see in a later section what backpressure means for Mono.

An API similar but not equal to RxJava's

The ReactiveX and RxJava vocabulary of operators can be overwhelming at times, and some operators can have confusing names for historical reasons. Reactor aims to have a more compact API and to deviate in some cases, e.g. in order to choose better names, but overall the two APIs look a lot alike. In fact the latest iterations in RxJava 2 actually borrow some vocabulary from Reactor as well, a hint of the ongoing close collaboration between the two projects. Some operators and concepts first appear in one library or the other, but often end up in both.

For instance, Flux has the same familiar just factory method (albeit with only two just variants: a single element and a vararg). But from has been replaced by several explicit variants, the most notable being fromIterable. Flux also has all the usual suspects in terms of operators: map, merge, concat, flatMap, take, etc.

One example of an RxJava operator name that Reactor eschewed was the puzzling amb operator, which has been replaced with the more appropriately named firstEmitting. Additionally, to introduce greater consistency in the API, toList has been renamed collectList. In fact all collectXXX operators now aggregate values into a specific type of collection but still produce a Mono of said collection, while toXXX methods are reserved for type conversions that take you out of the reactive world, e.g. toFuture().
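A quick sketch of that collectXXX versus toXXX distinction (values are made up):

Flux<String> letters = Flux.just("a", "b", "c");
Mono<List<String>> asList = letters.collectList();  // aggregates, but stays in the reactive world
Stream<String> asStream = letters.toStream();       // leaves the reactive world (blocking conversion)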

One more means by which Reactor can be leaner, this time in terms of class instantiation and resource usage, is fusion: Reactor is capable of merging multiple sequential uses of certain operators (e.g. calling concatWith twice) into a single use, only instantiating the operator's inner classes once (macro-fusion). That includes some data-source-based optimizations that greatly help Mono offset the cost of implementing Publisher. It is also capable of sharing resources like inner queues between several compatible operators (micro-fusion). These capabilities make Reactor a fourth-generation reactive library. But that is a topic for a future article.

Let's take a closer look at a few Reactor operators. (You will notice the contrast with some of the examples in the earlier articles in our series.)

A few operator examples

(This section contains snippets of code, and we encourage you to try them and experiment further with Reactor. To that effect, you should open your IDE of choice and create a test project with Reactor as a dependency.)

To do so in Maven, add the following to the dependencies section of your pom.xml:

<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <version>3.0.3.RELEASE</version>
</dependency>

To do the same in Gradle, edit the dependencies section to add reactor, similarly to this:

dependencies {
    compile "io.projectreactor:reactor-core:3.0.3.RELEASE"
}

Let's play with examples used in the previous articles in this series!

Very similarly to how you would create your first Observable in RxJava, you can create a Flux using the just(T…) and fromIterable(Iterable<T>) Reactor factory methods. Remember that, given a List, just would emit the list as one whole, single emission, while fromIterable will emit each element of the iterable list:

public class ReactorSnippets {

    private static List<String> words = Arrays.asList(
            "the", "quick", "brown", "fox", "jumped",
            "over", "the", "lazy", "dog");

    @Test
    public void simpleCreation() {
        Flux<String> fewWords = Flux.just("Hello", "World");
        Flux<String> manyWords = Flux.fromIterable(words);

        fewWords.subscribe(System.out::println);
        System.out.println();
        manyWords.subscribe(System.out::println);
    }
}

Like in the corresponding RxJava examples, this prints
Hello
World

the
quick
brown
fox
jumped
over
the
lazy
dog

In order to output the individual letters in the fox sentence we'll also need flatMap (as we did in RxJava by Example), but in Reactor we use fromArray instead of from. We then want to filter out duplicate letters and sort them using distinct and sort. Finally, we want to output an index for each distinct letter, which can be done using zipWith and range:

@Test
public void findingMissingLetter() {
    Flux<String> manyLetters = Flux
            .fromIterable(words)
            .flatMap(word -> Flux.fromArray(word.split("")))
            .distinct()
            .sort()
            .zipWith(Flux.range(1, Integer.MAX_VALUE),
                    (string, count) -> String.format("%2d. %s", count, string));

    manyLetters.subscribe(System.out::println);
}

This helps us notice that the s is missing, as expected:

1. a
2. b
...
18. r
19. t
20. u
...
25. z

One way of fixing that is to correct the original words array, but we could also manually add the "s" value to the Flux of letters using concat/concatWith and a Mono:

@Test
public void restoringMissingLetter() {
    Mono<String> missing = Mono.just("s");
    Flux<String> allLetters = Flux
            .fromIterable(words)
            .flatMap(word -> Flux.fromArray(word.split("")))
            .concatWith(missing)
            .distinct()
            .sort()
            .zipWith(Flux.range(1, Integer.MAX_VALUE),
                    (string, count) -> String.format("%2d. %s", count, string));

    allLetters.subscribe(System.out::println);
}

This adds the missing s just before we filter out duplicates and sort/count the letters:

1. a
2. b
...
18. r
19. s
20. t
...
26. z

The previous article noted the resemblance between the Rx vocabulary and the Streams API, and in fact when the data is readily available in memory, Reactor, like Java Streams, acts in simple push mode (see the backpressure section below to understand why). More complex and truly asynchronous snippets wouldn't work with this pattern of just subscribing in the main thread, primarily because control would return to the main thread and the application would exit as soon as the subscription is done. For instance:

@Test
public void shortCircuit() {
    Flux<String> helloPauseWorld =
            Mono.just("Hello")
                .concatWith(Mono.just("world")
                                .delaySubscriptionMillis(500));

    helloPauseWorld.subscribe(System.out::println);
}

This snippet prints "Hello", but fails to print the delayed "world" because the test terminates too early. In snippets and tests like this, where you essentially just write a main class, you'll usually want to revert to blocking behavior. To do that, you could create a CountDownLatch and call countDown in your subscriber (both in onError and onComplete). But then that's not very reactive, is it? (And what if you forget to count down, in case of an error for instance?)
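For completeness, such a latch-based workaround could look roughly like this (only a sketch, not a recommended approach; the 2-second timeout is arbitrary):

CountDownLatch latch = new CountDownLatch(1);

Flux<String> helloPauseWorld =
        Mono.just("Hello")
            .concatWith(Mono.just("world").delaySubscriptionMillis(500));

helloPauseWorld.subscribe(
        System.out::println,
        error -> latch.countDown(),   // easy to forget this error case...
        latch::countDown);

latch.await(2, TimeUnit.SECONDS);     // blocks the test thread (declare throws InterruptedException)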

The second way you could solve that issue is by using one of the operators that revert back to the non-reactive world. Specifically, toIterable and toStream will both produce a blocking instance. So let's use toStream for our example:

@Test
public void blocks() {
    Flux<String> helloPauseWorld =
            Mono.just("Hello")
                .concatWith(Mono.just("world")
                                .delaySubscriptionMillis(500));

    helloPauseWorld.toStream()
                   .forEach(System.out::println);
}

As you would expect, this prints "Hello" followed by a short pause, then prints "world" and terminates.

As we mentioned above, RxJava's amb() operator has been renamed firstEmitting (which more clearly hints at the operator's purpose: selecting the first Flux to emit). In the following example, we create a Mono whose start is delayed by 450ms and a Flux that emits its values with a 400ms pause before each value. When combining them with firstEmitting(), since the first value from the Flux comes in before the Mono's value, it is the Flux that ends up being played:

@Test
public void firstEmitting() {
    Mono<String> a = Mono.just("oops I'm late")
                         .delaySubscriptionMillis(450);
    Flux<String> b = Flux.just("let's get", "the party", "started")
                         .delayMillis(400);

    Flux.firstEmitting(a, b)
        .toIterable()
        .forEach(System.out::println);
}

This prints each part of the sentence with a short 400ms pause between each section.

At this point you might wonder, what if you're writing a test for a Flux that introduces delays of 4000ms instead of 400? You don't want to wait 4s in a unit test! Fortunately, we'll see in a later section that Reactor comes with powerful testing facilities that nicely cover this case.

But for now, we have sampled how Reactor compares for a few common operators, so let's zoom back and have a look at other differentiating aspects of the library.

A Java 8 foundation

Reactor targets Java 8 rather than previous Java versions. This once again aligns with the goal of reducing the API surface: RxJava targets Java 6, where there is no java.util.function package, so classes like Function or Consumer can't be leveraged. Instead, it had to add specific classes like Func1, Func2, Action0, Action1, etc. In RxJava 2 these classes mirror java.util.function, the way Reactor 2 used to do when it still had to support Java 7.

The Reactor API also embraces types introduced in Java 8. Most of the time-related operators deal with a duration (e.g. timeout, interval, delay, etc.), so using the Java 8 Duration class is appropriate.

The Java 8 Stream API and CompletableFuture can also both be easily converted to a Flux/Mono, and vice-versa. Should we usually convert a Stream to a Flux though? Not really. The level of indirection added by Flux or Mono is a negligible cost when they decorate more costly operations like IO or memory-bound operations, but most of the time a Stream doesn't imply that kind of latency and it is perfectly ok to use the Stream API directly. Note that for these use cases in RxJava 2 we'd use the Observable, as it is not backpressured and thus becomes a simple push use case once you've subscribed. But Reactor is based on Java 8, and the Stream API is expressive enough for most use cases. Note also that even though you can find Flux and Mono factories for literal or simple Objects, they mostly serve the purpose of being combined in higher-level flows. So typically you wouldn't want to transform an accessor like "long getCount()" into a "Mono<Long> getCount()" when migrating an existing codebase to reactive patterns.
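For instance, these conversions could look like this (a sketch with made-up values):

Stream<String> someStream = Stream.of("a", "b", "c");
CompletableFuture<String> someFuture = CompletableFuture.completedFuture("done");

Flux<String> fromStream = Flux.fromStream(someStream);   // Stream -> Flux
Mono<String> fromFuture = Mono.fromFuture(someFuture);   // CompletableFuture -> Mono

Stream<String> backToStream = fromStream.toStream();     // blocking conversion back out
CompletableFuture<String> backToFuture = fromFuture.toFuture();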

The Backpressure story

One of the main focuses (if not the main focus) of the RS specification and of Reactor itself is backpressure. The idea of backpressure is that in a push scenario where the producer is quicker than the consumer, there's value in letting the consumer signal back to the producer and say "Hey! Slow down a little, I'm overwhelmed". This gives the producer a chance to control its pace rather than having to resort to discarding data (sampling) or worse, risking a cascading failure.

You may wonder at this point where backpressure comes into the picture with Mono: what kind of consumer could possibly be overwhelmed by a single emission? Short answer is "probably none". However, there's still a key difference between how a Mono works and how a CompletableFuture works. The latter is push only: if you have a reference to the Future, it means the task processing an asynchronous result is already executing. On the other hand, what a backpressured Flux or Mono enables is a deferred pull-push interaction:

  1. Deferred because nothing happens before the call to subscribe()
  2. Pull because at the subscription and request steps, the Subscriber will send a signal upstream to the source and essentially pull the next chunk of data
  3. Push from producer to consumer from there on, within the boundary of the number of requested elements

For Mono, subscribe() is the button that you press to say "I'm ready to receive my data". For Flux, this button is request(n), which is kind of a generalization of the former.
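To make the pull aspect concrete, here is a hedged sketch that drives the demand manually by implementing the raw RS Subscriber (in real code you would more likely use one of Reactor's subscribe variants):

Flux.just("a", "b", "c").subscribe(new Subscriber<String>() {
    private Subscription subscription;

    @Override
    public void onSubscribe(Subscription s) {
        this.subscription = s;
        s.request(1);                    // pull the first element
    }

    @Override
    public void onNext(String value) {
        System.out.println(value);
        subscription.request(1);         // pull the next element, one at a time
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();
    }

    @Override
    public void onComplete() {
        System.out.println("done");
    }
});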

Realizing that Mono is a Publisher that will usually represent a costly task (in terms of IO, latency, etc.) is critical to understanding the value of backpressure here: if you don't subscribe, you don't pay the cost of that task. Since Mono will often be orchestrated in a reactive chain with regular backpressured Flux, possibly combining results from multiple asynchronous sources, the availability of this on-demand subscribe triggering is key in order to avoid blocking.

Having backpressure helps us differentiate that last use case from another broad Mono use case: asynchronously aggregating data from a Flux into a Mono. Operators like reduce and hasElement are capable of consuming each item in the Flux, aggregating some form of data about it (respectively the result of a reduce function and a boolean) and exposing that data as a Mono. In that case, the backpressure signalled upstream is Long.MAX_VALUE, which lets the upstream work in a fully push fashion.
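As a small illustration (values made up):

Flux<Integer> numbers = Flux.just(1, 2, 3, 4);
Mono<Integer> sum = numbers.reduce(0, (a, b) -> a + b);  // the whole Flux aggregated into one value
Mono<Boolean> hasThree = numbers.hasElement(3);          // the whole Flux aggregated into a boolean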

Another interesting aspect of backpressure is how it naturally limits the amount of objects held in memory by the stream. As a Publisher, the source of data is most probably slow (at least slowish) at producing items, so the request from downstream can very well start beyond the number of readily available items. In this case, the whole stream naturally falls into a push pattern where new items are notified to the consumer. But when there is a production peak and the pace of production accelerates, things fall nicely back into a pull model. In both cases, at most N data (the request() amount) is kept in memory.

You can reason about the memory used by your asynchronous processing by correlating that demand for N with the number of kilobytes an item consumes, W: you can then infer that at most W*N memory will be consumed. In fact, Reactor will most of the time take advantage of knowing N to apply optimizations: creating queues bounded accordingly and applying prefetching strategies where it can automatically request 75% of N every time that same ¾ amount has been received.

Finally, Reactor operators will sometimes change the backpressure signal to correlate it with the expectations and semantics they represent. One prime example of this behavior would be buffer(10): for every request of N from downstream, that operator would request 10N from upstream, which represents enough data to fill the number of buffers the subscriber is ready to consume. This is called "active backpressure", and it can be put to good use by developers in order to explicitly tell Reactor how to switch from an input volume to a different output volume, in micro-batching scenarios for instance.
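A rough sketch of buffer in action (sizes are illustrative): each downstream request for one buffer translates into an upstream request for ten individual items.

Flux<List<Integer>> batches = Flux.range(1, 100).buffer(10);
batches.subscribe(batch ->
        System.out.println("received a batch of " + batch.size()));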

Relation to Spring

Reactor is the reactive foundation for the whole Spring ecosystem, and most notably Spring 5 (through Spring Web Reactive) and Spring Data "Kay" (which corresponds to spring-data-commons 2.0).

Having a reactive version for both of these projects is essential, in the sense that this enables us to write a web application that is reactive from start to finish: a request comes in, is asynchronously processed all the way down to and including the database, and results come back asynchronously as well. This allows a Spring application to be very efficient with resources, avoiding the usual pattern of dedicating a thread to a request and blocking it for I/O.

So Reactor is going to be used for the internal reactive plumbing of future Spring applications, as well as in the APIs these various Spring components expose. More generally, they'll be able to deal with RS Publishers, but most of the time these will happen to be Flux/Mono, bringing in the rich feature set of Reactor. Of course, you will be able to use your reactive library of choice, as the framework provides hooks for  adapting between Reactor types and RxJava types or even simpler RS types.

At the time of writing this article, you can already experiment with Spring Web Reactive in Spring Boot by using Spring Boot 2.0.0.BUILD-SNAPSHOT and the spring-boot-starter-web-reactive dependency (e.g. by generating such a project on start.spring.io):

<dependency>
    <groupId>org.springframework.boot.experimental</groupId>
    <artifactId>spring-boot-starter-web-reactive</artifactId>
</dependency>

This lets you write your @Controller mostly as usual, but replaces the underlying Spring MVC traditional layer with a reactive one, replacing many of the Spring MVC contracts by reactive non-blocking ones. By default, this reactive layer is based on top of Tomcat 8.5, but you can also elect to use Undertow or Netty.

Additionally, although Spring APIs are based on Reactor types, the Spring Web Reactive module lets you use various reactive types for both the request and response:

  • Mono<T>: as the @RequestBody, the request entity T is asynchronously deserialized and you can chain your processing to the resulting Mono afterward. As the return type, once the Mono emits a value, the T is serialized asynchronously and sent back to the client. You can combine both approaches by augmenting the request Mono and returning that augmented chain as the resulting Mono.
  • Flux<T>: Used in streaming scenarios (including input streaming when used as @RequestBody and Server-Sent Events with a Flux<ServerSentEvent> return type)
  • Single/Observable: Same as Mono and Flux respectively, but switching to an RxJava implementation.
  • Mono<Void> as a return type: Request handling completes when the Mono completes.
  • Non-reactive return types (void and T): This now implies that your controller method is synchronous, but should be non-blocking (short-lived processing). The request handling finishes once the method is executed. The returned T is serialized back to the client asynchronously.

Here is a quick example of a plain text @Controller using the experimental web reactive module:

@RestController
public class ExampleController {

    private final MyReactiveLibrary reactiveLibrary;

    // Note: Spring 4.3+ autowires single constructors now
    public ExampleController(MyReactiveLibrary reactiveLibrary) {
        this.reactiveLibrary = reactiveLibrary;
    }

    @GetMapping("hello/{who}")
    public Mono<String> hello(@PathVariable String who) {
        return Mono.just(who)
                   .map(w -> "Hello " + w + "!");
    }

    @GetMapping("helloDelay/{who}")
    public Mono<String> helloDelay(@PathVariable String who) {
        return reactiveLibrary.withDelay("Hello " + who + "!!", 2);
    }

    @PostMapping("heyMister")
    public Flux<String> hey(@RequestBody Mono<Sir> body) {
        return Mono.just("Hey mister ")
                   .concatWith(body
                           .flatMap(sir -> Flux.fromArray(sir.getLastName().split("")))
                           .map(String::toUpperCase)
                           .take(1))
                   .concatWith(Mono.just(". how are you?"));
    }
}

The first endpoint takes a path variable, transforms it into a Mono<String> and maps that name to a greeting sentence that is returned to the client.

By doing a GET on /hello/Simon we get "Hello Simon!" as a text/plain response.

The second endpoint is a bit more complicated: it asynchronously receives a serialized Sir instance (a class simply made up of firstName and lastName attributes) and flatMaps it into a stream of the last name's letters. It then takes the first of these letters, maps it to upper case and concatenates it into a greeting sentence.

So POSTing the following JSON object to /heyMister

{"firstName": "Paul","lastName": "tEsT"
}

returns the string "Hey mister T. how are you?".

The reactive aspect of Spring Data is also currently being developed in the Kay release train, which for spring-data-commons is the 2.0.x branch. There is a first Milestone out that you can get by adding the Spring Data Kay-M1 bom to your pom:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-releasetrain</artifactId>
            <version>Kay-M1</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

Then for this simplistic example just add the Spring Data Commons dependency in your pom (it will take the version from the BOM above):

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-commons</artifactId>
</dependency>

Reactive support in Spring Data revolves around the new ReactiveCrudRepository<T, ID> interface, which extends Repository<T, ID>. This interface exposes CRUD methods, using Reactor input and return types. There is also an RxJava 1-based version called RxJava1CrudRepository. For instance, in the classical blocking CrudRepository, retrieving one entity by its id would be done using "T findOne(ID id)". It becomes "Mono<T> findOne(ID id)" and "Observable<T> findOne(ID id)" in ReactiveCrudRepository and RxJava1CrudRepository respectively. There are even variants that take a Mono/Single as argument, to asynchronously provide the key and compose on that.
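A repository declaration for the Sir entity used below could therefore look roughly like this (the interface name is made up; the commented signatures simply reflect the description above):

public interface SirRepository extends ReactiveCrudRepository<Sir, String> {
    // inherited, with Reactor return types:
    // Mono<Sir> findOne(String id);
    // Flux<Sir> findAll();
    // Mono<Sir> findOne(Mono<String> id);  // variant taking the key asynchronously
}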

Assuming a reactive backing store (or a mock ReactiveCrudRepository bean), the following (very naive) controller would be reactive from start to finish:

@RestController
public class DataExampleController {

    private final ReactiveCrudRepository<Sir, String> reactiveRepository;

    // Note: Spring 4.3+ autowires single constructors now
    public DataExampleController(ReactiveCrudRepository<Sir, String> repo) {
        this.reactiveRepository = repo;
    }

    @GetMapping("data/{who}")
    public Mono<ResponseEntity<Sir>> hello(@PathVariable String who) {
        return reactiveRepository.findOne(who)
                .map(ResponseEntity::ok)
                .defaultIfEmpty(ResponseEntity.status(404).body(null));
    }
}

Notice how the data repository usage naturally flows into the response path: we asynchronously fetch the entity and wrap it as a ResponseEntity using map, obtaining a Mono we can return right away. If the Spring Data repository cannot find data for this key, it will return an empty Mono. We make that explicit by using defaultIfEmpty and returning a 404.

Testing Reactor

The article "Testing RxJava" covered techniques for testing an Observable. As we saw, RxJava comes with a TestScheduler that you can use with operators that accept a Scheduler as a parameter, to manipulate a virtual clock on these operators. It also features a TestSubscriberclass that can be leveraged to wait for the completion of an Observable and to make assertions about every event (number and values for onNext, has onError triggered, etc.) In RxJava 2, theTestSubscriber is an RS Subscriber, so you can test Reactor's Flux and Mono with it!

In Reactor, these two broad features are combined into the StepVerifier class. It can be found in the addon module reactor-test from the reactor-addons repository. A StepVerifier can be initialized from any Publisher, using the StepVerifier.create builder. If you want to use virtual time, you can use the StepVerifier.withVirtualTime builder, which takes a Supplier<Publisher>. The reason for this is that it will first ensure that a VirtualTimeScheduler is created and enabled as the default Scheduler implementation to use, making it unnecessary to explicitly pass the scheduler to operators. The StepVerifier will then configure, if necessary, the Flux/Mono created within the Supplier, turning timed operators into "virtually timed" operators.

You can then script stream expectations and time progress: what the next elements should be, whether there should be an error, whether time should move forward, etc. Other methods include verifying that data matches a given Predicate or even consuming onNext events, allowing you to perform more advanced interactions with the value (like using an assertion library). Any AssertionError thrown by one of these will be reflected back in the final verification result. Finally, call verify() to check your expectations; this will actually subscribe to the source defined via StepVerifier.create or StepVerifier.withVirtualTime.

Let's take a few simple examples and demonstrate how StepVerifier works. For these snippets, you'll want to add the following test dependencies to your pom:

<dependency>
    <groupId>io.projectreactor.addons</groupId>
    <artifactId>reactor-test</artifactId>
    <version>3.0.3.RELEASE</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.5.2</version>
    <scope>test</scope>
</dependency>

First, imagine you have a reactive class called MyReactiveLibrary that produces a Flux and a Mono that you want to test:

@Component
public class MyReactiveLibrary {

    public Flux<String> alphabet5(char from) {
        return Flux.range((int) from, 5)
                   .map(i -> "" + (char) i.intValue());
    }

    public Mono<String> withDelay(String value, int delaySeconds) {
        return Mono.just(value)
                   .delaySubscription(Duration.ofSeconds(delaySeconds));
    }
}

The first method is intended to return the 5 letters of the alphabet following (and including) the given starting letter. The second method returns a Mono that emits a given value after a given delay, in seconds.

The first test we'd like to write ensures that calling alphabet5 from x limits the output to x, y, z. With StepVerifier it would go like this:

@Test
public void testAlphabet5LimitsToZ() {
    MyReactiveLibrary library = new MyReactiveLibrary();
    StepVerifier.create(library.alphabet5('x'))
                .expectNext("x", "y", "z")
                .expectComplete()
                .verify();
}

The second test we'd like to run on alphabet5 is that every returned value is an alphabetical character. For that we'd like to use a rich assertion library like AssertJ:

@Test
public void testAlphabet5LastItemIsAlphabeticalChar() {
    MyReactiveLibrary library = new MyReactiveLibrary();
    StepVerifier.create(library.alphabet5('x'))
                .consumeNextWith(c -> assertThat(c)
                        .as("first is alphabetic").matches("[a-z]"))
                .consumeNextWith(c -> assertThat(c)
                        .as("second is alphabetic").matches("[a-z]"))
                .consumeNextWith(c -> assertThat(c)
                        .as("third is alphabetic").matches("[a-z]"))
                .consumeNextWith(c -> assertThat(c)
                        .as("fourth is alphabetic").matches("[a-z]"))
                .expectComplete()
                .verify();
}

Turns out both of these tests fail :(. Let's have a look at the output the StepVerifier gives us in each case to see if we can spot the bug:

java.lang.AssertionError: expected: onComplete(); actual: onNext({)

and

java.lang.AssertionError: [fourth is alphabetic]
Expecting:
 "{"
to match pattern:
 "[a-z]"

So it looks like our method doesn't stop at z but continues emitting characters from the ASCII range. We could fix that by adding a .take(Math.min(5, 'z' - from + 1)) for instance, or using the same Math.min as the second argument to range.
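The fixed method could then look like this (a sketch of the second suggestion):

public Flux<String> alphabet5(char from) {
    return Flux.range((int) from, Math.min(5, 'z' - from + 1))
               .map(i -> "" + (char) i.intValue());
}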

The last test we want to make involves virtual time manipulation: we'll test the delaying method but without actually waiting for the given amount of seconds, by using the withVirtualTime builder:

@Test
public void testWithDelay() {
    MyReactiveLibrary library = new MyReactiveLibrary();
    Duration testDuration =
            StepVerifier.withVirtualTime(() -> library.withDelay("foo", 30))
                        .expectSubscription()
                        .thenAwait(Duration.ofSeconds(10))
                        .expectNoEvent(Duration.ofSeconds(10))
                        .thenAwait(Duration.ofSeconds(10))
                        .expectNext("foo")
                        .expectComplete()
                        .verify();

    System.out.println(testDuration.toMillis() + "ms");
}

This tests a Mono that is delayed by 30 seconds against the following scenario: an immediate subscription, followed by 3x10s during which nothing happens, then an onNext("foo") and completion.

The System.out output prints the actual duration the verification took, which in my latest run was 8ms :)

Note that when using the create builder instead, the thenAwait and expectNoEvent methods would still be available but would actually block for the provided duration.

StepVerifier comes with many more methods for describing expectations and asserting the state of a Publisher (and if you think of new ones, contributions and feedback are always welcome in the GitHub repository).

Custom Hot Source

Note that the concept of hot and cold observables discussed at the end of "RxJava by Example" also applies to Reactor.

If you want to create a custom Flux, instead of the RxJava AsyncEmitter class, you'd use Reactor's FluxSink. This will cover all the asynchronous corner cases for you and let you focus on emitting your values.

Use Flux.create and get a FluxSink in the callback that you can use to emit data via next. This custom Flux can be cold, so in order to make it hot you can use publish() and connect(). Building on the example from the previous article with a feed of price ticks, we get an almost verbatim translation in Reactor:

SomeFeed<PriceTick> feed = new SomeFeed<>();
Flux<PriceTick> flux = Flux.create(emitter -> {
    SomeListener listener = new SomeListener() {
        @Override
        public void priceTick(PriceTick event) {
            emitter.next(event);
            if (event.isLast()) {
                emitter.complete();
            }
        }

        @Override
        public void error(Throwable e) {
            emitter.error(e);
        }
    };
    feed.register(listener);
}, FluxSink.OverflowStrategy.BUFFER);

ConnectableFlux<PriceTick> hot = flux.publish();

Before connecting to the hot Flux, why not subscribe twice? One subscription will print the details of each tick while the other will only print the instrument:

hot.subscribe(priceTick -> System.out.printf("%s %4s %6.2f%n",
        priceTick.getDate(), priceTick.getInstrument(), priceTick.getPrice()));
hot.subscribe(priceTick -> System.out.println(priceTick.getInstrument()));

We then connect to the hot flux and let it run for 5 seconds before our test snippet terminates:

hot.connect();
Thread.sleep(5000);

(note that in the example repository, the feed would also terminate on its own if the isLast() method of PriceTick is changed).

FluxSink also lets you check if downstream has cancelled its subscription via isCancelled(). You can also get feedback on the outstanding requested amount via requestedFromDownstream(), which is useful if you want to simply comply with backpressure. Finally, you can make sure any specific resources your source uses are released upon cancellation via setCancellation.
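Sticking with the price tick example and reusing the feed from above, these facilities could be used roughly like this (feed.unregister is a hypothetical cleanup method, added purely for illustration):

Flux<PriceTick> politeFlux = Flux.create(sink -> {
    SomeListener listener = new SomeListener() {
        @Override
        public void priceTick(PriceTick event) {
            // a sketch: only emit while downstream still has outstanding demand
            if (!sink.isCancelled() && sink.requestedFromDownstream() > 0) {
                sink.next(event);
            }
        }

        @Override
        public void error(Throwable e) {
            sink.error(e);
        }
    };
    feed.register(listener);

    // release the feed's resources if the subscription is cancelled
    sink.setCancellation(() -> feed.unregister(listener));   // hypothetical unregister method
}, FluxSink.OverflowStrategy.BUFFER);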

Note that there's a backpressure implication of using FluxSink: you must provide an OverflowStrategy explicitly to let the operator deal with backpressure. This is equivalent to using onBackpressureXXX operators (e.g. FluxSink.OverflowStrategy.BUFFER is equivalent to using .onBackpressureBuffer()), which kind of overrides any backpressure instructions from downstream.

Conclusion

In this article, you have learned about Reactor, a fourth-generation reactive library that builds on the Rx language but targets Java 8 and the Reactive Streams specification. We've shown how the concepts you might have learned in RxJava also apply to Reactor, despite a few API differences. We've also shown how Reactor serves as the foundation for Spring 5, and that it offers resources for testing a Publisher/Flux/Mono.

If you want to dig deeper into using Reactor, the snippets presented in this article are available in our github repository. There is also a workshop, the "Lite Rx API hands-on", that covers more operators and use cases.

Finally, you can reach the Reactor team on Gitter and provide feedback there or through github issues (and of course, pull-requests are welcomed as well).

 
